ArXiv TLDR

DeVI: Physics-based Dexterous Human-Object Interaction via Synthetic Video Imitation

2604.20841

Hyeonwoo Kim, Jeonghwan Kim, Kyungwon Cho, Hanbyul Joo

cs.CV

TLDR

DeVI enables physics-based dexterous human-object interaction control using synthetic videos and a novel hybrid 2D/3D tracking reward.

Key contributions

  • DeVI framework uses text-conditioned synthetic videos for physics-based dexterous human-object interaction.
  • Introduces a hybrid tracking reward that combines 3D human tracking with robust 2D object tracking to overcome imprecise generative cues.
  • Achieves zero-shot generalization to diverse objects and interactions, requiring only generated video.
  • Outperforms prior methods in dexterous hand-object interactions and supports text-driven action diversity.
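The hybrid tracking reward described above can be pictured as a weighted blend of a 3D human-tracking term and a 2D object-tracking term. A minimal sketch follows; the function name, weights, and Gaussian-kernel form are illustrative assumptions, since the paper's exact reward formulation is not given in this summary.

```python
import numpy as np

def hybrid_tracking_reward(hand_joints_3d, ref_joints_3d,
                           obj_keypoints_2d, ref_keypoints_2d,
                           w_human=0.7, w_obj=0.3,
                           sigma_h=0.1, sigma_o=20.0):
    """Illustrative blend of 3D human tracking and 2D object tracking.

    All names, weights, and kernel widths here are assumptions for
    exposition, not DeVI's published reward.
    """
    # 3D human term: exponentiated mean joint-position error (meters),
    # comparing simulated hand joints against the reference motion
    err_h = np.linalg.norm(hand_joints_3d - ref_joints_3d, axis=-1).mean()
    r_human = np.exp(-(err_h / sigma_h) ** 2)

    # 2D object term: exponentiated mean keypoint error (pixels),
    # usable even though generated video provides no reliable object depth
    err_o = np.linalg.norm(obj_keypoints_2d - ref_keypoints_2d, axis=-1).mean()
    r_obj = np.exp(-(err_o / sigma_o) ** 2)

    # Weighted sum keeps the reward in [0, 1]
    return w_human * r_human + w_obj * r_obj
```

With this shape, perfect tracking of both signals yields a reward of 1, and either term degrades smoothly as its own error grows, so the 2D object cue can guide the policy without requiring 3D object supervision.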

Why it matters

This paper addresses the challenge of using readily available synthetic videos for realistic robotic manipulation. By enabling physics-based control from 2D video, DeVI opens new avenues for training dexterous robots without costly 3D motion capture. It significantly broadens the scope of learnable interactions.

Original Abstract

Recent advances in video generative models enable the synthesis of realistic human-object interaction videos across a wide range of scenarios and object categories, including complex dexterous manipulations that are difficult to capture with motion capture systems. While the rich interaction knowledge embedded in these synthetic videos holds strong potential for motion planning in dexterous robotic manipulation, their limited physical fidelity and purely 2D nature make them difficult to use directly as imitation targets in physics-based character control. We present DeVI (Dexterous Video Imitation), a novel framework that leverages text-conditioned synthetic videos to enable physically plausible dexterous agent control for interacting with unseen target objects. To overcome the imprecision of generative 2D cues, we introduce a hybrid tracking reward that integrates 3D human tracking with robust 2D object tracking. Unlike methods relying on high-quality 3D kinematic demonstrations, DeVI requires only the generated video, enabling zero-shot generalization across diverse objects and interaction types. Extensive experiments demonstrate that DeVI outperforms existing approaches that imitate 3D human-object interaction demonstrations, particularly in modeling dexterous hand-object interactions. We further validate the effectiveness of DeVI in multi-object scenes and text-driven action diversity, showcasing the advantage of using video as an HOI-aware motion planner.
