EgoEV-HandPose: Egocentric 3D Hand Pose Estimation and Gesture Recognition with Stereo Event Cameras
Luming Wang, Hao Shi, Jiajun Zhai, Kailun Yang, Kaiwei Wang
TLDR
EgoEV-HandPose uses stereo event cameras and a new large-scale dataset for robust egocentric 3D hand pose estimation and gesture recognition, outperforming RGB-based stereo methods.
Key contributions
- Introduces EgoEV-HandPose, an end-to-end framework for 3D bimanual hand pose and gesture recognition.
- Proposes KeypointBEV, a stereo fusion module with iterative refinement for depth and kinematic consistency.
- Creates EgoEVHands, the first large-scale real-world stereo event-camera dataset for egocentric hand perception.
- Achieves state-of-the-art 3D hand pose estimation (30.54 mm MPJPE) and gesture recognition (86.87% Top-1 accuracy).
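The reprojection-guided refinement idea behind KeypointBEV can be illustrated with a toy loop: project a current 3D keypoint estimate into both stereo views, measure the reprojection error against the observed 2D keypoints, and nudge the depth to shrink it. This is a minimal sketch under simplified assumptions (rectified pinhole stereo, a finite-difference gradient step); all names, camera parameters, and the update rule are illustrative, not the authors' implementation.

```python
import numpy as np

# Assumed toy camera model: rectified stereo, focal length in pixels,
# baseline in meters. These values are illustrative only.
F, BASELINE = 320.0, 0.06

def project(xyz, cam_offset_x=0.0):
    """Pinhole projection of 3D keypoints (N,3) into one rectified view."""
    x, y, z = xyz[:, 0] - cam_offset_x, xyz[:, 1], xyz[:, 2]
    return np.stack([F * x / z, F * y / z], axis=1)

def refine_depth(obs_left, obs_right, xyz_init, iters=50, lr=3e-5):
    """Iteratively adjust each keypoint's depth to reduce the summed
    stereo reprojection error (finite-difference gradient for clarity)."""
    xyz = xyz_init.copy()
    for _ in range(iters):
        for k in range(len(xyz)):
            def err(z):
                p = xyz[k:k + 1].copy()
                p[0, 2] = z
                e_l = project(p) - obs_left[k]
                e_r = project(p, BASELINE) - obs_right[k]
                return float((e_l ** 2).sum() + (e_r ** 2).sum())
            z = xyz[k, 2]
            grad = (err(z + 1e-4) - err(z - 1e-4)) / 2e-4
            xyz[k, 2] = z - lr * grad  # gradient step on depth only
    return xyz

# Toy check: recover a keypoint's depth from its two stereo projections,
# starting from a deliberately wrong initial depth.
gt = np.array([[0.05, -0.02, 0.40]])
obs_l, obs_r = project(gt), project(gt, BASELINE)
est = refine_depth(obs_l, obs_r, np.array([[0.05, -0.02, 0.55]]))
```

The paper's actual module operates on learned features lifted into a bird's-eye-view space and also enforces kinematic consistency across the hand skeleton; this sketch only conveys why iterating on reprojection error resolves the monocular depth ambiguity that a single event camera cannot.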
Why it matters
This paper addresses critical limitations in egocentric hand perception by leveraging stereo event cameras, which excel in challenging conditions such as low light and fast motion. The proposed framework and dataset advance event-based hand perception, enabling more robust human-computer interaction and AR/VR experiences.
Original Abstract
Egocentric 3D hand pose estimation and gesture recognition are essential for immersive augmented/virtual reality, human-computer interaction, and robotics. However, conventional frame-based cameras suffer from motion blur and limited dynamic range, while existing event-based methods are hindered by ego-motion interference, monocular depth ambiguity, and the lack of large-scale real-world stereo datasets. To overcome these limitations, we propose EgoEV-HandPose, an end-to-end framework for joint 3D bimanual pose estimation and gesture recognition from stereo event streams. Central to our approach is KeypointBEV, a flexible stereo fusion module that lifts features into a canonical bird's-eye-view space and employs an iterative reprojection-guided refinement loop to progressively resolve depth uncertainty and enforce kinematic consistency. In addition, we introduce EgoEVHands, the first large-scale real-world stereo event-camera dataset for egocentric hand perception, containing 5,419 annotated sequences with dense 3D/2D keypoints across 38 gesture classes under varying illumination. Extensive experiments demonstrate that EgoEV-HandPose achieves state-of-the-art performance with an MPJPE of 30.54mm and 86.87% Top-1 gesture recognition accuracy, significantly outperforming RGB-based stereo and prior event-camera methods, particularly in low-light and bimanual occlusion scenarios, thereby setting a new benchmark for event-based egocentric perception. The established dataset and source code will be publicly released at https://github.com/ZJUWang01/EgoEV-HandPose.