Multi-Terrain Humanoid Locomotion Challenge

Overview

This challenge focuses on developing robust locomotion controllers for humanoid robots to navigate complex, unstructured terrains - a critical capability for real-world deployment in homes, disaster zones, and natural environments.

Using the high-performance Isaac Gym simulator, participants will create locomotion policies that enable humanoids to traverse diverse surfaces including slopes, stairs and rubble terrain. The competition provides standardized robots and terrains, allowing you to focus on advanced locomotion strategies.

Competition Objective

Your goal is twofold:

  1. Maximize traversal distance - Go as far as possible on each terrain
  2. Minimize completion time - Achieve the same distance as fast as possible

The final score combines both distance coverage and efficiency metrics.

Participation

Registration

All participant teams need to register by emailing challenge-terrain@outlook.com with their team information. You will receive a unique token for evaluating your solution.

Registration template

Subject: [ICCV2025-HRSIC]Humanoid Locomotion Challenge Registration

Body:
Here is the registration information for [Team Name]:
1. Point of Contact: [Team Member 1], [Affiliation], [Email]
2. [Team Member 2], [Affiliation]
...
Whether to open-source the code (for awards): [Yes/No]
Additional comments: [Any additional information or questions]

If you have any questions, feel free to reach out! We may also send updates regarding the challenge to registered teams.

Submission

We use a client-server architecture for policy evaluation: the server hosts the simulated evaluation environments, while clients evaluate their policies through our provided REST API. The REST API mirrors the Gym interface (e.g., env.reset() and env.step(action)), so existing Gym-compatible policies need no modifications. Results will be shown on the public leaderboard.
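
As a rough sketch of what a client might look like, the snippet below wraps a hypothetical evaluation endpoint behind the familiar reset()/step() interface. The base URL, the /reset and /step paths, the bearer-token header, and the JSON field names are all placeholders; the actual REST API and authentication scheme will be documented in the challenge repo.

```python
# Minimal sketch of a Gym-style client for the evaluation server.
# URL, endpoints, auth header, and JSON fields below are placeholders.
import numpy as np
import requests


class RemoteEnv:
    """Exposes the remote evaluation environment via reset()/step()."""

    def __init__(self, base_url: str, token: str, robot: str = "unitree_g1"):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        self.session.headers.update({"Authorization": f"Bearer {token}"})
        self.robot = robot

    def reset(self):
        resp = self.session.post(f"{self.base_url}/reset", json={"robot": self.robot})
        resp.raise_for_status()
        return np.asarray(resp.json()["observation"], dtype=np.float32)

    def step(self, action):
        payload = {"action": np.asarray(action, dtype=np.float32).tolist()}
        resp = self.session.post(f"{self.base_url}/step", json=payload)
        resp.raise_for_status()
        data = resp.json()
        obs = np.asarray(data["observation"], dtype=np.float32)
        return obs, data["reward"], data["done"], data.get("info", {})


if __name__ == "__main__":
    NUM_ACTIONS = 12  # placeholder; set to your robot's action dimension
    env = RemoteEnv("https://eval.example.org/api", token="YOUR_TOKEN")
    obs, done = env.reset(), False
    while not done:
        action = np.zeros(NUM_ACTIONS, dtype=np.float32)  # replace with your policy's output
        obs, reward, done, info = env.step(action)
```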

Please refer to our challenge repo for more details. The REST API will be available soon. Stay tuned!

Important Dates

  • Registration Deadline: September 21, 2025
  • Submission Deadline: September 28, 2025
  • Winners Announcement: September 30, 2025

🏆 Awards

Thanks to our sponsors at Unitree and Fourier:

  • 🥇 First Prize: $1000
  • 🥈 Second Prize: $500
  • 🥉 Third Prize: $300

Participants need to open-source their code to be eligible for awards. However, you can still participate in the challenge without open-sourcing your code. If you want to change your response regarding open-sourcing your code, please contact us.

Evaluation

Policy Specifications

Your policy must process standardized observations and output specific control signals.

Input (Observation Space)

The policy will receive the following observations at each timestep:

  • Joint positions and velocities ([joint_details])
  • Base orientation and angular velocity ([base_orientation_details])
  • Terrain height samples around the robot ([terrain_sampling_details])
  • Previous action values ([action_history_details])

Detailed observation specifications will be provided in our challenge repo. Stay tuned!

Output (Action Space)

Your policy must output:

  • Target joint positions ([joint_target_details])
Note: Your policy must align with the provided observation and action spaces.
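
Below is a minimal sketch of a policy that consumes a flattened observation vector with the four components listed above and outputs target joint positions. All dimensions and the network architecture are placeholders chosen for illustration; the exact observation and action specifications will come from the challenge repo.

```python
# Sketch of a policy matching the observation/action structure above.
# All dimensions are placeholders until the official specs are released.
import torch
import torch.nn as nn

NUM_JOINTS = 12           # placeholder joint count
NUM_HEIGHT_SAMPLES = 121  # placeholder number of terrain height samples

OBS_DIM = (
    NUM_JOINTS * 2        # joint positions and velocities
    + 3 + 3               # base orientation and angular velocity (placeholder sizes)
    + NUM_HEIGHT_SAMPLES  # terrain height samples around the robot
    + NUM_JOINTS          # previous action values
)
ACT_DIM = NUM_JOINTS      # target joint positions


class LocomotionPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 512), nn.ELU(),
            nn.Linear(512, 256), nn.ELU(),
            nn.Linear(256, ACT_DIM),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, OBS_DIM) -> target joint positions: (batch, ACT_DIM)
        return self.net(obs)
```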

Robots

There are 3 robots available for the challenge:

  • Unitree G1
  • Unitree H1-2
  • Fourier N1

You can start with any robot to participate in the challenge. To encourage the development of general algorithms, we will compare the scores across all robots. The more robots your algorithm works on, the higher your score will be.

Test Scenarios

Policies will be evaluated on three specialized scenarios (detailed in Scenarios):

  1. Robustness Testing: Extended terrain sequences testing endurance
  2. Generalization Testing: Novel terrain combinations testing adaptability
  3. Extreme Testing: Maximum-difficulty terrains testing capability limits

Scoring Metric

We compute two metrics for each evaluation episode, the completion rate and the efficiency score, and combine them as follows:

Scoring Formula

Episode Score = Completion Rate × 0.9 + (1 - Efficiency Score) × 0.1

The scoring formula might change. Stay tuned!

Scores are averaged across all scenario episodes for each robot, and awards are determined by the total score summed over all robots.
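
For concreteness, here is a small sketch of how the formula and aggregation described above could be computed; it follows the current formula, which, as noted, may still change.

```python
# Sketch of the current scoring rule (subject to change, per the note above).
from statistics import mean


def episode_score(completion_rate: float, efficiency_score: float) -> float:
    # Distance coverage dominates (90%); the remaining 10% rewards efficiency.
    return completion_rate * 0.9 + (1.0 - efficiency_score) * 0.1


def total_score(results: dict[str, list[tuple[float, float]]]) -> float:
    """results maps robot name -> (completion_rate, efficiency_score) per episode."""
    per_robot = {
        robot: mean(episode_score(c, e) for c, e in episodes)
        for robot, episodes in results.items()
    }
    # Final ranking uses the per-robot averages summed over all robots.
    return sum(per_robot.values())


# Example with made-up numbers: two robots, two episodes each.
print(total_score({
    "unitree_g1": [(0.9, 0.4), (0.7, 0.6)],
    "fourier_n1": [(0.5, 0.8), (0.6, 0.7)],
}))
```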

Code

Our open-source Terrain Benchmark is accessible in this repository.

Resources

Reference Papers

Scenarios

We have designed different scenarios to evaluate different aspects of robot performance. All terrain used for evaluation is built from the single-terrain models of the terrain module provided in the code.

  1. Robustness Evaluation: To evaluate the robustness of the robot, we extend the terrain to several times its training size. Taking stair terrain as an example, the 5-10 steps used during training are expanded to 20 or more steps to test the robot's endurance over long terrain sequences.
    Figure: Stair terrain for robustness evaluation
  2. Extreme Evaluation: To assess the upper limits of locomotion strategies, we conduct extreme terrain evaluations. For example, the height of stairs will be set significantly higher than in typical scenarios. These challenging conditions are designed to test the maximal capability of the trained humanoid policies, revealing potential failure modes and identifying areas for further improvement.
    Figure: Stair terrain for extreme evaluation
  3. Generalization Evaluation: To evaluate the generalization performance of locomotion strategies, we create complex scenarios by combining multiple terrain types within a single environment. Requiring robots to transition smoothly and adaptively between different terrain elements, such as stairs, slopes, rubble, and narrow bridges, lets us assess more rigorously how well they generalize learned skills and robustly handle unseen or mixed conditions of the kind found in real-world environments.
Figures: Simple, Normal, Hard, and Challenging terrain examples for generalization evaluation
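
The exact terrain parameters live in the benchmark's terrain module; purely as an illustration (the parameter names below are invented, not the benchmark's actual API), the three scenarios can be thought of as variations on a base terrain configuration:

```python
# Illustrative only: these parameter names and values are invented, not the benchmark's API.
base_stairs = {"type": "stairs", "num_steps": 8, "step_height_m": 0.12}

# Robustness: same difficulty, but a much longer terrain sequence.
robustness_stairs = {**base_stairs, "num_steps": 24}

# Extreme: a single element pushed well beyond typical difficulty.
extreme_stairs = {**base_stairs, "step_height_m": 0.25}

# Generalization: several terrain types combined in one environment.
generalization_course = [
    {"type": "stairs", "num_steps": 8, "step_height_m": 0.12},
    {"type": "slope", "incline_deg": 15},
    {"type": "rubble", "roughness_m": 0.08},
    {"type": "narrow_bridge", "width_m": 0.3},
]
```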

Rules

  1. Eligibility: Participation is open to academic, industry, and independent teams worldwide.
  2. Fair Play: Use of cheating techniques, hard-coding the test environment, or exploiting simulator bugs is strictly prohibited.
  3. Final Decisions: The organizing committee reserves the right to make final decisions regarding rule interpretation and winner selection.

Organizers