ArXiv TLDR

EgoForce: Forearm-Guided Camera-Space 3D Hand Pose from a Monocular Egocentric Camera

arXiv:2605.12498

Christen Millerdurai, Shaoxiang Wang, Yaxu Xie, Vladislav Golyanik, Didier Stricker + 1 more

cs.CV, cs.GR

TLDR

EgoForce reconstructs absolute 3D hand pose from a single egocentric camera, robustly handling diverse head-mounted device configurations.

Key contributions

  • Uses a differentiable forearm representation to stabilize hand pose estimation.
  • Employs a unified arm-hand transformer that predicts hand and forearm geometry jointly, mitigating depth-scale ambiguity.
  • Integrates a ray-space closed-form solver for absolute 3D pose recovery across diverse camera models.
  • Achieves state-of-the-art 3D accuracy, reducing camera-space MPJPE by up to 28% on the HOT3D dataset.

Why it matters

Monocular 3D hand pose estimation is crucial for AR/VR and telepresence, but it suffers from depth-scale ambiguity and poor generalization across the varied optics of head-mounted devices. EgoForce addresses both issues by recovering robust, absolute 3D hand pose from diverse egocentric cameras with a single unified network, reducing the need for costly device-specific training. This advances practical, unobtrusive hand interaction.

Original Abstract

Reconstructing the absolute 3D pose and shape of the hands from the user's viewpoint using a single head-mounted camera is crucial for practical egocentric interaction in AR/VR, telepresence, and hand-centric manipulation tasks, where sensing must remain compact and unobtrusive. While monocular RGB methods have made progress, they remain constrained by depth-scale ambiguity and struggle to generalize across the diverse optical configurations of head-mounted devices. As a result, models typically require extensive training on device-specific datasets, which are costly and laborious to acquire. This paper addresses these challenges by introducing EgoForce, a monocular 3D hand reconstruction framework that recovers robust, absolute 3D hand pose and its position from the user's (camera-space) viewpoint. EgoForce operates across fisheye, perspective, and distorted wide-FOV camera models using a single unified network. Our approach combines a differentiable forearm representation that stabilizes hand pose, a unified arm-hand transformer that predicts both hand and forearm geometry from a single egocentric view, mitigating depth-scale ambiguity, and a ray space closed-form solver that enables absolute 3D pose recovery across diverse head-mounted camera models. Experiments on three egocentric benchmarks show that EgoForce achieves state-of-the-art 3D accuracy, reducing camera-space MPJPE by up to 28% on the HOT3D dataset compared to prior methods and maintaining consistent performance across camera configurations. For more details, visit the project page at https://dfki-av.github.io/EgoForce.
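The paper's solver is not reproduced here, but the ray-space idea it builds on is standard multi-view geometry: every pixel, whatever the lens, can be unprojected to a viewing ray, so a solver expressed in ray space becomes camera-agnostic. A minimal sketch of that unprojection step for two of the camera families the abstract names (pinhole/perspective and equidistant fisheye); the function names and intrinsics are illustrative assumptions, not EgoForce's API:

```python
import numpy as np

def pinhole_ray(u, v, fx, fy, cx, cy):
    """Unproject a pixel to a unit viewing ray under a pinhole model."""
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return d / np.linalg.norm(d)

def equidistant_fisheye_ray(u, v, fx, fy, cx, cy):
    """Unproject a pixel under an equidistant fisheye model (r = f * theta)."""
    mx, my = (u - cx) / fx, (v - cy) / fy
    r = np.hypot(mx, my)  # normalized radius equals the angle theta off-axis
    if r < 1e-9:
        return np.array([0.0, 0.0, 1.0])  # pixel at the principal point
    s = np.sin(r) / r
    # Unit ray: (mx*s)^2 + (my*s)^2 + cos(r)^2 = sin(r)^2 + cos(r)^2 = 1
    return np.array([mx * s, my * s, np.cos(r)])
```

Once pixels are mapped to rays this way, downstream reasoning (e.g. a closed-form absolute-position estimate) no longer depends on which camera model produced them, which is the generalization property the abstract emphasizes.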
