HiFAR: Multi-Stage Curriculum Learning for High-Dynamics Humanoid Fall Recovery

Penghui Chen     Yushi Wang     Changsheng Luo     Wenhan Cai     Mingguo Zhao


Abstract

Humanoid robots face considerable difficulty in autonomously recovering from falls, especially in dynamic and unstructured environments. Conventional control methods struggle with the high-dimensional dynamics and contact-rich nature of fall recovery, while reinforcement learning approaches are hindered by sparse rewards, complex collision scenarios, and sim-to-real discrepancies.

In this work, we introduce HiFAR, a multi-stage curriculum learning framework. HiFAR trains on a staged progression of increasingly complex, high-dimensional recovery tasks, enabling the robot to acquire efficient and stable fall recovery strategies and to adapt its policy to real-world falls. We evaluate the method on a real humanoid robot, demonstrating autonomous recovery from a diverse range of falls with high success rates, rapid recovery times, robustness, and generalization.
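The staged progression described above can be sketched as a simple curriculum scheduler that promotes training to a harder task set once the policy's recent success rate clears a threshold. This is a minimal illustrative sketch, not the paper's implementation: the stage names, window size, and promotion rule are all assumptions.

```python
# Hypothetical sketch of a staged curriculum for fall-recovery training.
# Stage names, thresholds, and the promotion rule are illustrative
# assumptions, not HiFAR's actual training procedure.
from dataclasses import dataclass


@dataclass
class Stage:
    name: str
    success_threshold: float  # promote once the rolling success rate exceeds this


class CurriculumScheduler:
    """Advances through progressively harder recovery task sets."""

    def __init__(self, stages, window=100):
        self.stages = stages
        self.idx = 0            # index of the current stage
        self.window = window    # number of recent episodes to average over
        self.results = []       # rolling record of episode successes

    @property
    def current(self):
        return self.stages[self.idx]

    def record(self, success: bool):
        """Record one episode outcome and promote if the window is full and passing."""
        self.results.append(success)
        self.results = self.results[-self.window:]
        full = len(self.results) == self.window
        rate = sum(self.results) / self.window if full else 0.0
        if full and rate >= self.current.success_threshold and self.idx < len(self.stages) - 1:
            self.idx += 1
            self.results = []   # reset statistics for the new, harder stage


# Illustrative stage ordering, from simple to contact-rich initial conditions.
stages = [
    Stage("supine, flat ground", 0.9),
    Stage("prone, flat ground", 0.9),
    Stage("random lateral pose, sloped terrain", 0.8),
]
scheduler = CurriculumScheduler(stages, window=10)
```

A training loop would call `scheduler.record(episode_succeeded)` after each rollout and reset the environment using `scheduler.current` to sample initial states.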


Experiments

Basic Fall Recovery

Prone recovery

Supine recovery

Lateral recovery


Initial State

Legs crossed

Legs crossed

Random lateral

Random lateral

Legs apart

Sit


Terrain

Slope supine

Slope prone

Ball under leg

Ball between legs

Gravel

Grass


Disturbance

5kg load

Push

Impact


Varied Behaviors

Push recovery and fall recovery

Walk after supine recovery

Walk after prone recovery


High Success Rate

20 consecutive prone fall recovery trials with a 100% success rate

20 consecutive supine fall recovery trials with a 100% success rate


BibTeX

@article{hifar2025,
  title={HiFAR: Multi-Stage Curriculum Learning for High-Dynamics Humanoid Fall Recovery},
  author={Chen, Penghui and Wang, Yushi and Luo, Changsheng and Cai, Wenhan and Zhao, Mingguo},
  journal={arXiv preprint arXiv:2502.20061},
  year={2025}
}