NeuroLip: An Event-driven Spatiotemporal Learning Framework for Cross-Scene Lip-Motion-based Visual Speaker Recognition
Junguang Yao, Wenye Liu, Stjepan Picek, Yue Zheng
TLDR
NeuroLip is an event-driven framework for robust cross-scene visual speaker recognition from lip motion, outperforming representative existing methods by at least 8.54% in cross-scene settings.
Key contributions
- Proposes NeuroLip, an event-driven framework for cross-scene lip-motion speaker recognition.
- Introduces Temporal-aware Voxel Encoding and Structure-aware Spatial Enhancer modules (a minimal voxel-encoding sketch follows this list).
- Utilizes Polarity Consistency Regularization to preserve motion-direction cues.
- Releases DVSpeaker, a new 50-subject event-based lip-motion dataset.
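
To make the voxel-encoding idea concrete, the sketch below shows one standard way to bin an asynchronous event stream of (x, y, t, p) tuples into a fixed-size temporal voxel grid. The paper's adaptive event weighting is not described in this summary, so ordinary bilinear weighting over temporal bins is used as a stand-in; the function name and signature are illustrative, not the authors' code.

```python
# Minimal sketch: event stream -> temporal voxel grid (not the paper's exact module).
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """events: (N, 4) array of [x, y, t, p], with x in [0, width), y in [0, height),
    t a timestamp, and polarity p in {-1, +1}."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel
    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    t = events[:, 2].astype(np.float64)
    p = events[:, 3].astype(np.float32)
    # Normalize timestamps to the range [0, num_bins - 1].
    t_norm = (num_bins - 1) * (t - t.min()) / max(t.max() - t.min(), 1e-9)
    t0 = np.floor(t_norm).astype(np.int64)
    frac = (t_norm - t0).astype(np.float32)
    # Bilinear temporal weighting: each event contributes to its two nearest bins,
    # signed by polarity.
    for bin_offset, weight in ((0, 1.0 - frac), (1, frac)):
        b = np.clip(t0 + bin_offset, 0, num_bins - 1)
        np.add.at(voxel, (b, y, x), p * weight)
    return voxel
```

A grid like this is typically fed to a 2D CNN backbone; the paper's temporal-aware variant would replace the fixed bilinear weights with learned, adaptive per-event weights.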
Why it matters
Lip-motion-based speaker recognition offers a silent, robust biometric solution, especially when audio is unavailable. This paper addresses challenges in capturing fine-grained lip dynamics across varying scenes using event cameras. NeuroLip provides a significant advancement in cross-scene generalization for visual speaker recognition.
Original Abstract
Visual speaker recognition based on lip motion offers a silent, hands-free, and behavior-driven biometric solution that remains effective even when acoustic cues are unavailable. Compared to traditional methods that rely heavily on appearance-dependent representations, lip motion encodes subject-specific behavioral dynamics driven by consistent articulation patterns and muscle coordination, offering inherent stability across environmental changes. However, capturing these robust, fine-grained dynamics is challenging for conventional frame-based cameras due to motion blur and low dynamic range. To exploit the intrinsic stability of lip motion and address these sensing limitations, we propose NeuroLip, an event-based framework that captures fine-grained lip dynamics under a strict yet practical cross-scene protocol: training is performed under a single controlled condition, while recognition must generalize to unseen viewing and lighting conditions. NeuroLip features a 1) Temporal-aware Voxel Encoding module with adaptive event weighting, 2) Structure-aware Spatial Enhancer that amplifies discriminative behavioral patterns by suppressing noise while preserving vertically structured motion information, and 3) Polarity Consistency Regularization mechanism to retain motion-direction cues encoded in event polarities. To facilitate systematic evaluation, we introduce DVSpeaker, a comprehensive event-based lip-motion dataset comprising 50 subjects recorded under four distinct viewpoint and illumination scenarios. Extensive experiments demonstrate that NeuroLip achieves near-perfect matched-scene accuracy and robust cross-scene generalization, attaining over 71% accuracy on unseen viewpoints and nearly 76% under low-light conditions, outperforming representative existing methods by at least 8.54%. The dataset and code are publicly available at https://github.com/JiuZeongit/NeuroLip.
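The abstract does not give the exact form of the Polarity Consistency Regularization. One plausible reading is an auxiliary loss that keeps embeddings computed from positive-only and negative-only event streams aligned, so motion-direction information carried by polarity is not discarded by the encoder. The PyTorch sketch below illustrates that reading only; `encoder`, the cosine formulation, and the loss weighting are assumptions, not the authors' implementation.

```python
# Hedged sketch of a polarity-consistency regularizer (assumed formulation).
import torch
import torch.nn.functional as F

def polarity_consistency_loss(encoder, pos_voxels, neg_voxels):
    """encoder: any nn.Module mapping voxel grids (B, C, H, W) to embeddings (B, D).
    pos_voxels / neg_voxels: grids built from positive-only / negative-only events."""
    z_pos = F.normalize(encoder(pos_voxels), dim=-1)
    z_neg = F.normalize(encoder(neg_voxels), dim=-1)
    # Penalize divergence between the two polarity-specific embeddings (1 - cosine).
    return (1.0 - (z_pos * z_neg).sum(dim=-1)).mean()

# Typical use (illustrative): total_loss = ce_loss + lambda_pcr * polarity_consistency_loss(...)
```

In practice such a term would be added to the identity-classification loss with a small weight, leaving the main objective to drive speaker discrimination.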