Humanoid–Object Interaction Challenge

Overview

The Humanoid–Object Interaction Challenge tasks participants with endowing simulated humanoid agents with the ability to perceive, reason about, and physically interact with everyday objects in realistic indoor scenes. Unlike pure locomotion benchmarks, here agents must combine contact‐rich manipulation, interactive perception, and semantic reasoning to complete high‑level tasks such as lying on a bed, pushing a fridge, picking up a box, and sitting on a chair.

Leveraging the Isaac Lab simulation platform and our open‑source HOIBench codebase, participants will develop HOI interaction strategies that enable humanoid robots to perform tasks such as lying on a bed, lifting a box, and touching objects. The competition provides standardized robots and scenario imports, allowing you to focus on designing advanced HOI interaction strategies.

Challenge Objectives

Participants will be evaluated on two complementary axes:

  1. Maximize Task Success Rate – percentage of trials in which the humanoid completes the specified interaction (e.g. lifts the box by ≥0.2 m, sits stably for ≥0.3 s).
  2. Minimize Completion Time – complete each interaction (e.g., sit, lie, lift) as quickly as possible.

A composite leaderboard score will combine these metrics, rewarding both robust success and elegant, human‑like manipulation behaviors.

Participation

Registration

All participating teams must register by emailing iccv2025-hoi@outlook.com with their team information. You will receive a unique token for evaluating your solution.

Registration template

Subject: [ICCV2025-HRSIC] Humanoid–Object Interaction Registration

Body:
Here is our registration information:
[Team Name], [Team Leader/Point of Contact], [Primary Contact Email]
1. [Team Member 1], [Affiliation]
2. [Team Member 2], [Affiliation]
...
Whether to open-source the code (for awards): [Yes/No]
Additional comments: [Any additional information or questions]

If you have any questions, feel free to reach out! We may also send updates regarding the challenge to registered teams.

Submission

We adopt a client-server architecture for policy evaluation. The server hosts the simulated evaluation environments, while clients evaluate their policies via our provided REST API. The REST API exposes the same interface as Isaac Lab (e.g., env.reset() and env.step(action)), so existing gym-compatible policies need no modification. Results will be shown on the public leaderboard.

Please refer to our challenge repo for more details. The REST API will be available soon. Stay tuned!
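Until the official client is released, the snippet below sketches what a gym-style evaluation loop against the remote environment could look like. The module name hoi_challenge_client, the RemoteEnv class, and its arguments are placeholders rather than a published API; only the reset/step interface is taken from the description above.

```python
# Placeholder import: the real client module and class will be announced
# with the REST API release; only the loop structure is meant to carry over.
from hoi_challenge_client import RemoteEnv  # hypothetical

def evaluate(policy, token: str, num_episodes: int = 10):
    # The server-side environment mirrors Isaac Lab's gym-compatible
    # interface, so the loop is identical to evaluating a local env.
    env = RemoteEnv(token=token)  # token received at registration
    for _ in range(num_episodes):
        obs, info = env.reset()
        done = False
        while not done:
            action = policy(obs)  # your trained HOI policy
            obs, reward, terminated, truncated, info = env.step(action)
            done = terminated or truncated
    env.close()
```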

Important Dates

  • Registration Deadline: September 21, 2025
  • Submission Deadline: September 28, 2025
  • Winners Announcement: September 30, 2025

🏆 Awards

Thanks to our sponsors:

  • 🥇 First Prize: $1,000
  • 🥈 Second Prize: $500
  • 🥉 Third Prize: $300

Participants need to open-source their code to be eligible for awards. However, you can still participate in the challenge without open-sourcing your code. If you want to change your initial response regarding open-sourcing your code, please contact us.

Evaluation

Policy Specifications

Your policy must process standardized observations and output specific control signals.

Input (Observation Space)

The policy will receive the following observations at each timestep:

  • Agent self state: joint positions & velocities, base orientation & angular velocity ([self_obs])
  • Object state: bounding box dimensions, center position, and root‐node rotation ([obj_obs])
  • Previous action values ([historical_self_obs])

Complete observation specifications will be provided in our challenge repo. Stay tuned!

Output (Action Space)

Your policy must output:

  • Target joint positions ([actions])
Note: Your policy must align with the provided observation space.
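For orientation, a minimal policy skeleton consuming the named observation blocks and producing target joint positions might look like the sketch below. The dictionary keys mirror the bracketed names above, and all layer sizes and observation/action dimensions are placeholders chosen for illustration; the authoritative shapes and format will be given in the challenge repo.

```python
import torch
import torch.nn as nn

class HOIPolicy(nn.Module):
    """Sketch of a policy mapping the listed observation blocks to actions.

    The dimensions are placeholders; the real sizes depend on the robot
    (G1 / H1 / SMPL Humanoid) and on the official observation specification.
    """

    def __init__(self, self_dim=81, obj_dim=9, hist_dim=69, num_joints=23):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(self_dim + obj_dim + hist_dim, 512),
            nn.ELU(),
            nn.Linear(512, 256),
            nn.ELU(),
            nn.Linear(256, num_joints),  # target joint positions ([actions])
        )

    def forward(self, obs: dict) -> torch.Tensor:
        # Concatenate the blocks named in the observation space above.
        x = torch.cat(
            [obs["self_obs"], obs["obj_obs"], obs["historical_self_obs"]], dim=-1
        )
        return self.net(x)
```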

Robots

There are 3 robots available for the challenge:

  • Unitree G1
  • Unitree H1
  • SMPL Humanoid

Test Tasks

Policies will be evaluated on these specialized tasks:

  1. Robustness Evaluation: To assess the agent’s stability under varied object placements, objects are spawned at random positions around the agent at the start of each trial (an illustrative spawn-sampling sketch follows this list). Successful policies must locate, approach, and complete the interaction (e.g., lie, push, lift, sit) despite these randomized placements. (Task weight = 0.7)
  2. Generalization Evaluation: To measure adaptability, agents are tested on previously unseen object instances from the same category. This ensures the policy can coherently interact with new bed, box, or chair models without any additional fine‑tuning. (Task weight = 0.3)
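To give a feel for the robustness setting, the snippet below samples an object spawn pose in a ring around the agent. The radii and yaw range are illustrative values only; the official randomization parameters live on the evaluation server.

```python
import numpy as np

def sample_object_spawn(agent_xy, rng, r_min=1.0, r_max=3.0):
    """Illustrative spawn sampler; r_min/r_max and the yaw range are assumptions."""
    angle = rng.uniform(0.0, 2.0 * np.pi)   # direction from the agent
    radius = rng.uniform(r_min, r_max)      # distance from the agent
    xy = np.asarray(agent_xy) + radius * np.array([np.cos(angle), np.sin(angle)])
    yaw = rng.uniform(-np.pi, np.pi)        # random object heading
    return xy, yaw

# One spawn per trial, e.g.:
rng = np.random.default_rng(seed=0)
object_xy, object_yaw = sample_object_spawn(agent_xy=(0.0, 0.0), rng=rng)
```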

Scoring Metric

Performance is measured using a weighted combination of:

Scoring Formula

The exact scoring metric may still be refined; stay tuned!

Task Score = (Success Rate × 0.9 + (1 - Efficiency Score) × 0.1) × Task Weight

Example Calculation:
If a robot achieves a 60% Success Rate and a 40% Completion Rate on a Robustness Evaluation task, using 500 timesteps, and the baseline finished the Robustness Evaluation in 400 timesteps:
Task Score = (60 × 0.9 + (100 - 60) × 0.1) × 0.8 = 57.2

Full evaluation details and reference times will be published in the starter kit.
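In the meantime, here is a minimal sketch of the task-score computation, taking the formula above at face value and treating Success Rate and Efficiency Score as values in [0, 1]; the inputs in the usage line are hypothetical.

```python
def task_score(success_rate: float, efficiency_score: float, task_weight: float) -> float:
    """Composite task score per the formula above (all inputs in [0, 1]).

    task_weight is 0.7 for the Robustness Evaluation and 0.3 for the
    Generalization Evaluation; the exact Efficiency Score definition and
    the reference times will be fixed in the starter kit.
    """
    return (success_rate * 0.9 + (1.0 - efficiency_score) * 0.1) * task_weight

# Hypothetical inputs, for illustration only:
print(task_score(success_rate=0.75, efficiency_score=0.8, task_weight=0.7))  # 0.4865
```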

Open Source

Our open-source HOI benchmark is accessible in this repository. Only the provided robots (Unitree H1, Unitree G1, and SMPL Humanoid) can be used for submissions; any custom or other robots will not be counted toward the leaderboard score.

Resources

Reference Papers

Tasks

The Humanoid–Object Interaction Challenge defines the following six interaction tasks, with success criteria for each:

  1. Lie: The pelvis, ankles, and head must remain on the bed surface for at least 0.3 s, fully within the bed area in a bird’s‑eye view; the head height must be between [H, H+0.4 m], where H is the bed height.
  2. Push: The box’s center of mass must be within 0.1 m of the target point and hold that position for at least 0.5 s.
  3. Touch: The end‑effector (hand) must reach within 1 cm of the target point and make contact within 1 s.
  4. Lift: The box must be lifted by at least 0.2 m, and in the final frame both wrist joints must remain within 0.1 m of the box surface (see the sketch after this list).
  5. Sit: The pelvis joint must lie within the seating area in a bird’s‑eye view, at a height between [H, H+0.27 m] (H = seat height), and sustain that pose for at least 0.3 s.
  6. Claw: At claw start, the box must sit at least 0.1 m above the table, with both wrists within 0.1 m of the box surface; after placement, the box’s center of mass must be within 0.1 m of the target and hold that position for at least 0.5 s.
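To make the criteria concrete, the sketch below implements the Lift check from item 4; how the box height, wrist positions, and box-surface samples are read out of the simulator is an assumption here, and this is not the official evaluation code.

```python
import numpy as np

def lift_success(box_height, box_initial_height, wrist_positions, box_surface_points):
    """Illustrative Lift check following the criteria above.

    box_height / box_initial_height: current and initial box heights in metres.
    wrist_positions: (2, 3) array, left and right wrist positions in the final frame.
    box_surface_points: (N, 3) array sampling the box surface in the final frame.
    """
    wrist_positions = np.asarray(wrist_positions)
    box_surface_points = np.asarray(box_surface_points)

    # Criterion 1: the box is raised by at least 0.2 m.
    lifted = (box_height - box_initial_height) >= 0.2

    # Criterion 2: each wrist stays within 0.1 m of the box surface in the final frame.
    dists = np.linalg.norm(
        wrist_positions[:, None, :] - box_surface_points[None, :, :], axis=-1
    )
    wrists_close = bool(np.all(dists.min(axis=1) <= 0.1))

    return bool(lifted) and wrists_close
```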

Rules

  1. Eligibility: Participation is open to academic, industry, and independent teams worldwide. Each team may submit only one entry.
  2. Robot Requirements: Only the provided robots (Unitree H1, Unitree G1, and SMPL Humanoid) can be used for submissions. Any custom or other robots will not be counted toward the leaderboard score.
  3. Submission: Teams must submit both their trained HOI policy and the model inference code according to the provided submission guidelines. All entries must be reproducible.
  4. Evaluation: All policies will be evaluated based on the percentage of tasks successfully completed and the time taken to complete them, with faster completions receiving higher scores.
  5. Fair Play: Use of cheating techniques, hard-coding the test environment, or exploiting simulator bugs is strictly prohibited.
  6. Final Decisions: The organizing committee reserves the right to make final decisions regarding rule interpretation and winner selection.

Organizers

Sponsors