The Humanoid–Object Interaction Challenge tasks participants with endowing simulated humanoid agents with the ability to perceive, reason about, and physically interact with everyday objects in realistic indoor scenes. Unlike pure locomotion benchmarks, it requires agents to combine contact-rich manipulation, interactive perception, and semantic reasoning to complete high-level tasks such as lying on a bed, pushing a fridge, picking up a box, and sitting on a chair.
Leveraging the Isaac Lab simulation platform and our open‑source HOIBench codebase, participants will develop HOI interaction strategies that enable humanoid robots to perform tasks such as lying on a bed, lifting a box, and touching objects. The competition provides standardized robots and scenario imports, allowing you to focus on designing advanced HOI interaction strategies.
Participants will be evaluated on two complementary axes: task success and execution efficiency (see the scoring formula below).
A composite leaderboard score will combine these metrics, rewarding both robust success and elegant, human‑like manipulation behaviors.
All participating teams must register by emailing iccv2025-hoi@outlook.com with their team information. You will receive a unique token for evaluating your solution.
Subject: [ICCV2025-HRSIC] Humanoid–Object Interaction Registration
Body:
Here is our registration information:
[Team Name], [Team Leader/Point of Contact], [Primary Contact Email]
1. [Team Member 1], [Affiliation]
2. [Team Member 2], [Affiliation]
...
Willing to open-source the code (for award eligibility): [Yes/No]
Additional comments: [Any additional information or questions]
If you have any questions, feel free to reach out! We may also send updates regarding the challenge to registered teams.
We use a client-server architecture for policy evaluation. The server hosts the simulated evaluation environments, while clients evaluate their policies through our provided REST API. The REST API mirrors the Isaac Lab interface, e.g., env.reset() and env.step(action), so existing gym-compatible policies run without modification. Results will be shown on the public leaderboard.
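To make this concrete, here is a minimal sketch of a client-side wrapper, written before the official API release: the /reset and /step endpoints, the JSON payload fields, and the bearer-token header below are assumptions for illustration, not the published interface.

```python
import numpy as np
import requests

class RemoteEnv:
    """Thin client that forwards env.reset()/env.step() calls to the
    evaluation server over REST (endpoint names are hypothetical)."""

    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        # Assumed auth scheme: the registration token as a bearer header.
        self.headers = {"Authorization": f"Bearer {token}"}

    def reset(self):
        resp = requests.post(f"{self.base_url}/reset", headers=self.headers)
        resp.raise_for_status()
        data = resp.json()
        return np.asarray(data["observation"]), data.get("info", {})

    def step(self, action):
        payload = {"action": np.asarray(action).tolist()}
        resp = requests.post(f"{self.base_url}/step",
                             headers=self.headers, json=payload)
        resp.raise_for_status()
        data = resp.json()
        return (np.asarray(data["observation"]), data["reward"],
                data["terminated"], data["truncated"], data.get("info", {}))

# Usage mirrors a local Isaac Lab / gym environment (placeholder URL/token):
# env = RemoteEnv("https://eval.example.org/api", token="YOUR-TOKEN")
# obs, info = env.reset()
# obs, reward, terminated, truncated, info = env.step(action)
```

Because the wrapper exposes the same reset/step signatures, an existing rollout loop can switch between local and remote evaluation simply by swapping the environment object.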
Please refer to our challenge repo for more details. The REST API will be available soon. Stay tuned!
Thanks to our sponsors, the challenge offers the following prizes: 🥇 First Prize ($1000) 🥈 Second Prize ($500) 🥉 Third Prize ($300)
Your policy must process standardized observations and output specific control signals.
The policy will receive the following observations at each timestep:
Complete observation specifications will be provided in our challenge repo. Stay tuned!
Your policy must output control signals for the robot; the exact action specification will likewise be provided in the challenge repo. A placeholder skeleton is sketched below.
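Until the official observation and action specifications are released, a compliant policy might look like the following sketch; the dimensions, the MLP architecture, and the act() interface are all placeholder assumptions, not the official spec.

```python
import numpy as np
import torch
import torch.nn as nn

OBS_DIM = 256  # placeholder: replace with the published observation size
ACT_DIM = 19   # placeholder: replace with the published action size

class MLPPolicy(nn.Module):
    """Maps a flat observation vector to a flat control vector."""

    def __init__(self, obs_dim: int = OBS_DIM, act_dim: int = ACT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )

    @torch.no_grad()
    def act(self, obs: np.ndarray) -> np.ndarray:
        x = torch.as_tensor(obs, dtype=torch.float32)
        return self.net(x).numpy()

policy = MLPPolicy()
action = policy.act(np.zeros(OBS_DIM, dtype=np.float32))  # dummy rollout step
```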
There are 3 robots available for the challenge: the Unitree H1, the Unitree G1, and the SMPL Humanoid (see the submission rules below).
Policies will be evaluated on the six specialized interaction tasks defined at the end of this document.
Performance is measured using a weighted combination of success rate and efficiency:
Task Score = (Success Rate × 0.9 + (1 − Efficiency Score) × 0.1) × Task Weight
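As a worked illustration of this formula, the snippet below aggregates per-task scores into a composite total; all success rates, efficiency scores, and task weights shown are made-up numbers, not reference values.

```python
def task_score(success_rate: float, efficiency_score: float, weight: float) -> float:
    """Task Score = (Success Rate × 0.9 + (1 − Efficiency Score) × 0.1) × Task Weight."""
    return (success_rate * 0.9 + (1.0 - efficiency_score) * 0.1) * weight

tasks = [
    # (success_rate, efficiency_score, task_weight): illustrative values only
    (0.80, 0.30, 1.0),
    (0.55, 0.45, 1.5),
]
composite = sum(task_score(s, e, w) for s, e, w in tasks)
print(f"composite leaderboard score: {composite:.3f}")
```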
Full evaluation details and reference times will be published in the starter kit.
Our open-source HOI benchmark is accessible in this repository. Only the provided robots (Unitree H1, Unitree G1, and SMPL Humanoid) may be used for submissions; any custom or other robot will not count toward the leaderboard score.
The Humanoid–Object Interaction Challenge defines the following six interaction tasks, with success criteria for each: