Figure AI Achieves Natural Humanoid Walking via Reinforcement Learning

- Figure trained a neural walking controller entirely in simulation using reinforcement learning.
- The resulting policy produces human-like walking and transfers to physical robots without modification.
Figure has introduced a new locomotion controller for its Figure 02 humanoid robot, developed entirely through reinforcement learning in high-fidelity simulation. A single policy is trained across thousands of simulated robots with varied body parameters and scenarios, so it learns human-like walking that holds up in a wide range of conditions.
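A minimal sketch of what per-robot randomization in a parallel-simulation setup might look like; the parameter names, ranges, and environment count below are illustrative assumptions, not Figure's actual training configuration:

```python
import numpy as np

# Illustrative randomization ranges; not Figure's actual configuration.
RANDOMIZATION = {
    "body_mass_scale": (0.9, 1.1),    # scale link masses by +/-10%
    "motor_strength":  (0.85, 1.15),  # actuator torque multiplier
    "ground_friction": (0.4, 1.2),    # terrain friction coefficient
    "push_force_n":    (0.0, 50.0),   # magnitude of random external pushes
}

def sample_env_params(rng: np.random.Generator) -> dict:
    """Draw one simulated robot's physical parameters for an episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION.items()}

# Thousands of parallel environments, each with its own body/terrain variation,
# so a single policy must learn a gait that works across all of them.
rng = np.random.default_rng(0)
env_params = [sample_env_params(rng) for _ in range(4096)]
```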
The walking controller mimics human gait characteristics such as heel strikes, toe-offs, and synchronized arm swing, guided by reference trajectories and a multi-objective reward function. The reward includes terms for velocity tracking, energy efficiency, and robustness to terrain changes and external forces.
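As a rough illustration of how such a multi-objective reward might be composed; the term structure, state fields, and weights here are assumptions for the sketch, not Figure's published reward:

```python
import numpy as np

def walking_reward(state: dict, ref: dict) -> float:
    """Hypothetical multi-objective walking reward (illustrative weights)."""
    # Velocity tracking: reward staying near the commanded base velocity.
    r_vel = np.exp(-np.sum((state["base_vel"] - ref["cmd_vel"]) ** 2))
    # Gait imitation: stay close to human reference joint trajectories.
    r_gait = np.exp(-np.sum((state["joint_pos"] - ref["joint_pos"]) ** 2))
    # Energy efficiency: penalize large joint torques.
    r_energy = -1e-3 * float(np.sum(state["joint_torque"] ** 2))
    return float(1.0 * r_vel + 0.5 * r_gait + r_energy)
```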
To bridge simulation and real-world deployment, Figure applies domain randomization to account for hardware variability and uses closed-loop torque control to correct for modeling inaccuracies. The same policy has been shown to run unmodified across a fleet of 10 physical robots, pointing toward scalable, consistent real-world performance.
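One common way to realize closed-loop torque control is a PD loop around policy-predicted joint targets; the sketch below assumes that structure, with illustrative gains rather than Figure's values:

```python
import numpy as np

KP, KD = 80.0, 2.0  # illustrative PD gains, not Figure's values

def torque_command(q: np.ndarray, dq: np.ndarray, q_target: np.ndarray) -> np.ndarray:
    """One control step: the policy outputs joint position targets, and
    measured joint positions (q) and velocities (dq) feed back every tick,
    absorbing sim-to-real modeling error that pure feedforward torques
    would not."""
    return KP * (q_target - q) - KD * dq
```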
🌀 Tom's Take:
Teaching robots to walk isn’t just about motion—it’s about encoding behaviors we take for granted as humans and doing it in a way that scales. Figure’s approach shows how simulation can turn basic capability into fleet-wide reliability.
Source: Figure AI Newsroom