Figure AI Achieves Natural Humanoid Walking via Reinforcement Learning

Figure AI has unveiled an end-to-end neural network trained with reinforcement learning (RL) that enables its humanoid robot, Figure 02, to walk with a human-like gait. By simulating the equivalent of years of walking data in just a few hours inside a high-fidelity physics simulator, the robot learned heel-strikes, toe-offs, and synchronized arm swings that closely mimic human locomotion.
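
Figure has not published its training code or reward design, but RL locomotion policies are typically trained against a sum of shaped reward terms. Below is a minimal, purely illustrative Python sketch of how gait features like heel-strike timing and arm swing might be encouraged; every term, field name, and coefficient here is an assumption, not Figure's actual method.

```python
import numpy as np

def gait_reward(state):
    """Hypothetical shaped reward for a human-like gait (all terms are assumptions)."""
    # Track the commanded forward velocity.
    forward = 1.0 * state["forward_velocity"]
    # Penalize actuator effort to discourage jerky, energy-hungry motions.
    effort = -0.01 * float(np.sum(np.square(state["joint_torques"])))
    # Bonus when heel contact precedes toe contact within the stance phase.
    heel_toe = 0.5 * float(state["heel_strike_before_toe"])
    # Encourage arm swing roughly anti-phase with the same-side leg.
    arm_sync = 0.2 * float(np.cos(state["arm_phase"] - state["leg_phase"] - np.pi))
    return forward + effort + heel_toe + arm_sync

# Example with made-up simulator readings:
sample_state = {
    "forward_velocity": 1.2,                      # m/s
    "joint_torques": np.array([5.0, -3.0, 2.0]),  # N*m
    "heel_strike_before_toe": True,
    "arm_phase": 0.0,                             # rad
    "leg_phase": np.pi,                           # rad
}
print(gait_reward(sample_state))
```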

The training involved thousands of virtual robots with varied physical parameters exposed to diverse scenarios, enhancing the robustness of the learned walking policy. This policy was then transferred directly to real-world Figure 02 robots without additional tuning, achieving consistent and natural walking behaviors across the fleet.
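
Training many virtual robots with varied physical parameters is commonly known as domain randomization. The sketch below shows, in broad strokes, how per-robot physics could be sampled for a large simulated fleet; the parameter names and ranges are illustrative assumptions, not Figure's published values.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_robot_params(rng):
    """Randomize per-robot physics so a single policy must cover the whole range.
    Parameter names and ranges are illustrative only."""
    return {
        "mass_scale": rng.uniform(0.9, 1.1),        # +/-10% link masses
        "joint_friction": rng.uniform(0.5, 1.5),    # actuator friction multiplier
        "motor_latency_s": rng.uniform(0.0, 0.02),  # control-loop latency in seconds
        "ground_friction": rng.uniform(0.6, 1.2),   # terrain contact variation
    }

# Thousands of randomized virtual robots share one policy during training; a policy
# that walks well across this whole distribution is more likely to transfer
# zero-shot to real hardware without per-robot tuning.
population = [sample_robot_params(rng) for _ in range(4096)]
print(len(population), population[0])
```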

🌀 Tom's Take:

Figure AI's approach exemplifies the potential of reinforcement learning to bridge the gap between simulated training and real-world robotic applications, marking a significant step toward versatile, human-like humanoid robots.


Source: Figure AI Newsroom

© 2025 Remix Reality LLC. All rights reserved.