This challenge focuses on developing robust locomotion controllers for humanoid robots to navigate complex, unstructured terrains, a critical capability for real-world deployment in homes, disaster zones, and natural environments.
Using the high-performance Isaac Gym simulator, participants will create locomotion policies that enable humanoids to traverse diverse surfaces including slopes, stairs, and rubble terrain. The competition provides standardized robots and terrains, allowing you to focus on advanced locomotion strategies.
Your goal is twofold: travel as far as possible across each terrain while keeping your controller efficient.
The final score combines both distance coverage and efficiency metrics.
All participating teams must register by emailing challenge-terrain@outlook.com with their team information. You will receive a unique token for evaluating your solution.
Subject: [ICCV2025-HRSIC] Humanoid Locomotion Challenge Registration
Body:
Here is the registration information for [Team Name]:
1. Point of Contact: [Team Member 1], [Affiliation], [Email]
2. [Team Member 2], [Affiliation]
...
Whether to open-source the code (for awards): [Yes/No]
Additional comments: [Any additional information or questions]
If you have any questions, feel free to reach out! We may also send updates regarding the challenge to registered teams.
We use a client-server architecture for policy evaluation. The server hosts the simulated evaluation environments, while clients evaluate their policies via our provided REST API. The REST API is designed to match the gym interface, e.g., `env.reset()` and `env.step(action)`, so no modifications are needed for existing gym-compatible policies. Results will be shown on the public leaderboard.
Please refer to our challenge repo for more details. The REST API will be available soon. Stay tuned!
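Until the official API is released, the sketch below illustrates what a gym-compatible client loop could look like. Everything here is an illustrative assumption, not the official interface: the `RemoteEnv` wrapper, the `/reset` and `/step` endpoints, the bearer-token header, the JSON payload fields, and the action dimension are all hypothetical placeholders pending the challenge repo documentation.

```python
import requests


class RemoteEnv:
    """Hypothetical gym-style client for the evaluation server.

    Endpoint names, payload fields, and token auth are assumptions;
    the official REST API will be documented in the challenge repo.
    """

    def __init__(self, server_url: str, token: str):
        self.server_url = server_url.rstrip("/")
        self.headers = {"Authorization": f"Bearer {token}"}

    def reset(self):
        # Start a new evaluation episode and return the initial observation.
        resp = requests.post(f"{self.server_url}/reset", headers=self.headers)
        resp.raise_for_status()
        return resp.json()["observation"]

    def step(self, action):
        # Send one action; receive the usual gym (obs, reward, done, info) tuple.
        resp = requests.post(
            f"{self.server_url}/step",
            headers=self.headers,
            json={"action": list(action)},
        )
        resp.raise_for_status()
        data = resp.json()
        return data["observation"], data["reward"], data["done"], data.get("info", {})


def policy(obs):
    # Placeholder: replace with your trained locomotion policy.
    return [0.0] * 19  # action dimension here is illustrative


# The loop is identical to a local gym evaluation loop.
env = RemoteEnv("https://eval-server.example.org", token="YOUR_TOKEN")
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(policy(obs))
```

Because the client only speaks HTTP, this setup would let you evaluate policies trained in any framework, as long as they consume and produce the standardized observation and action formats.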
Thanks to our sponsors at Unitree and Fourier: 🥇 First Prize ($1000) 🥈 Second Prize ($500) 🥉 Third Prize ($300)
Your policy must process standardized observations and output specific control signals.
At each timestep, the policy receives a set of standardized observations; detailed observation specifications will be provided in our challenge repo. Stay tuned!
The control signals your policy must output will likewise be specified in the challenge repo.
There are three robots available for the challenge.
Policies will be evaluated on three specialized scenarios (detailed in Scenarios).
We compute two metrics for each evaluation episode, a completion rate and an efficiency score, which combine into the episode score:
Episode Score = Completion Rate × 0.9 + (1 - Efficiency Score) × 0.1
The scoring formula might change. Stay tuned!
Scores are averaged across all scenario episodes for each robot, and awards are determined by the total score summed over all robots.
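For concreteness, here is a small sketch of how the current scoring and aggregation could be computed. It assumes both metrics are normalized to [0, 1], and, as noted above, the weights are subject to change; the function and variable names are ours, not the official evaluation code.

```python
def episode_score(completion_rate: float, efficiency_score: float) -> float:
    """Current formula: Completion Rate x 0.9 + (1 - Efficiency Score) x 0.1.

    Assumes both metrics are normalized to [0, 1]; weights may change.
    """
    return completion_rate * 0.9 + (1.0 - efficiency_score) * 0.1


def total_score(results: dict) -> float:
    """results maps robot name -> list of (completion_rate, efficiency_score)
    tuples, one per scenario episode. The per-robot score is the mean over
    its episodes; the award ranking uses the sum over all robots.
    """
    return sum(
        sum(episode_score(c, e) for c, e in episodes) / len(episodes)
        for episodes in results.values()
    )
```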
Our open-source Terrain Benchmark is accessible in this repository.
We have designed distinct scenarios to evaluate different aspects of robot performance. All terrain used for evaluation is generated from the single terrain model of the terrain module provided in the code.