ArXiv TLDR

E-3DPSM: A State Machine for Event-Based Egocentric 3D Human Pose Estimation

arXiv:2604.08543

Mayur Deshmukh, Hiroyasu Akada, Helge Rhodin, Christian Theobalt, Vladislav Golyanik

cs.CV

TLDR

E-3DPSM introduces an event-driven continuous pose state machine for egocentric 3D human pose estimation, improving accuracy by up to 19% and temporal stability by up to 2.7x while running in real time.

Key contributions

  • Introduces E-3DPSM, an event-driven continuous pose state machine for egocentric 3D human pose estimation.
  • Aligns continuous human motion with fine-grained event dynamics for stable and drift-free 3D pose.
  • Achieves real-time performance (80 Hz) and sets new state-of-the-art on benchmarks.
  • Improves 3D estimation accuracy by up to 19% and temporal stability by up to 2.7x.
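The state-machine idea in the contributions above can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the class name, the skeleton size, and the fixed convex-blend fusion rule are all assumptions (the actual model evolves learned latent states and fuses predictions with learned components).

```python
import numpy as np

NUM_JOINTS = 16  # assumed skeleton size; the paper may use a different joint set


class PoseStateMachine:
    """Hypothetical sketch of an event-driven pose state machine."""

    def __init__(self, alpha: float = 0.7):
        self.alpha = alpha                     # fusion weight (assumed fixed here)
        self.pose = np.zeros((NUM_JOINTS, 3))  # current 3D joint positions

    def step(self, event_delta: np.ndarray, direct_pose: np.ndarray) -> np.ndarray:
        """Advance the state by one event window.

        event_delta : per-joint 3D displacement predicted from the events
        direct_pose : absolute 3D pose predicted directly from the same window
        """
        # Evolve the state with the continuous change implied by the events.
        propagated = self.pose + event_delta
        # Fuse the drift-prone propagated state with the jitter-prone
        # direct estimate to get a stable, drift-free final pose.
        self.pose = self.alpha * propagated + (1 - self.alpha) * direct_pose
        return self.pose
```

The blend captures the stated trade-off: integrating event deltas alone accumulates drift, while per-window direct predictions alone jitter; fusing the two stabilizes both.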

Why it matters

Existing event-based 3D pose methods struggle with accuracy and temporal stability, limiting applications like VR/AR. E-3DPSM overcomes these issues by aligning its design with the asynchronous, continuous nature of event streams rather than treating them like frames. The result is more reliable and precise egocentric 3D human pose estimation, crucial for immersive technologies.

Original Abstract

Event cameras offer multiple advantages in monocular egocentric 3D human pose estimation from head-mounted devices, such as millisecond temporal resolution, high dynamic range, and negligible motion blur. Existing methods effectively leverage these properties, but suffer from low 3D estimation accuracy, insufficient in many applications (e.g., immersive VR/AR). This is due to the design not being fully tailored towards event streams (e.g., their asynchronous and continuous nature), leading to high sensitivity to self-occlusions and temporal jitter in the estimates. This paper rethinks the setting and introduces E-3DPSM, an event-driven continuous pose state machine for event-based egocentric 3D human pose estimation. E-3DPSM aligns continuous human motion with fine-grained event dynamics; it evolves latent states and predicts continuous changes in 3D joint positions associated with observed events, which are fused with direct 3D human pose predictions, leading to stable and drift-free final 3D pose reconstructions. E-3DPSM runs in real-time at 80 Hz on a single workstation and sets a new state of the art in experiments on two benchmarks, improving accuracy by up to 19% (MPJPE) and temporal stability by up to 2.7x. See our project page for the source code and trained models.
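The abstract reports accuracy as MPJPE (Mean Per-Joint Position Error), the standard 3D pose metric: the Euclidean distance between each predicted and ground-truth joint, averaged over joints (and frames). A minimal reference computation:

```python
import numpy as np


def mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """Mean Per-Joint Position Error.

    pred, gt : arrays of shape (..., num_joints, 3) in the same units
               (typically millimeters for 3D human pose benchmarks).
    Returns the per-joint Euclidean error averaged over all joints and
    any leading batch/time dimensions.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

A "19% improvement" in MPJPE means this averaged distance drops by 19% relative to the prior state of the art on the benchmark.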
