Evaluation of Pose Estimation Systems for Sign Language Translation
Catherine O'Brien, Gerard Sant, Mathias Müller, Sarah Ebling
TLDR
This paper systematically evaluates pose estimators for sign language translation, finding SDPose and Sapiens outperform common baselines.
Key contributions
- Systematically compared 8 pose estimators for sign language translation (SLT) performance.
- Found that SDPose and Sapiens achieved the best SLT performance, outperforming MediaPipe by ~1.5 BLEU points.
- Analyzed pose estimator robustness to occlusion and the impact of missing hand keypoints on translation quality (see the sketch after this list).
- Released code to facilitate research and adoption of alternative pose estimators in SLT.
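The contributions link missing hand keypoints to lower BLEU/BLEURT. Below is a minimal sketch of how such a missing-hand rate might be computed, assuming pose sequences are stored as NumPy arrays of shape (frames, keypoints, 3) with NaN marking undetected keypoints; the array layout and the hand-keypoint index range are illustrative assumptions, not the authors' released format.

```python
import numpy as np

# Hypothetical layout: 133 COCO-WholeBody-style keypoints per frame,
# with the last 42 (indices 91-132) covering the left and right hands.
HAND_SLICE = slice(91, 133)

def missing_hand_rate(pose_seq: np.ndarray) -> float:
    """Fraction of frames in which at least one hand keypoint is missing (NaN).

    pose_seq: array of shape (num_frames, num_keypoints, 3), with NaN
    for keypoints the estimator failed to detect.
    """
    hands = pose_seq[:, HAND_SLICE, :]                  # (frames, 42, 3)
    frame_has_missing = np.isnan(hands).any(axis=(1, 2))
    return float(frame_has_missing.mean())

# Example: a 100-frame clip where the hands are lost in the last 20 frames.
seq = np.random.rand(100, 133, 3)
seq[80:, 91:, :] = np.nan
print(f"missing-hand rate: {missing_hand_rate(seq):.2f}")  # -> 0.20
```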
Why it matters
This research highlights the critical impact of pose estimator choice on sign language translation system performance. By systematically comparing models and releasing evaluation code, it gives researchers practical guidance for selecting robust, accurate pose estimators, ultimately improving SLT systems.
Original Abstract
Many sign language translation (SLT) systems operate on pose sequences instead of raw video to reduce input dimensionality, improve portability, and partially anonymize signers. The choice of pose estimator is often treated as an implementation detail, with systems defaulting to widely available tools such as MediaPipe Holistic or OpenPose. We present a systematic comparison of pose estimators for pose-based SLT, covering widely used baselines (MediaPipe Holistic, OpenPose) and newer whole-body/high-capacity models (MMPose WholeBody, OpenPifPaf, AlphaPose, SDPose, Sapiens, SMPLest-X). We quantify downstream impact by training a controlled SLT pipeline on RWTH-PHOENIX-Weather 2014 where only the pose representation varies, evaluating with BLEU and BLEURT. To contextualize translation outcomes, we analyze temporal stability, missing hand keypoints, and robustness to occlusion using higher-resolution videos from the Signsuisse dataset. SDPose and Sapiens achieve the best translation performance (BLEU ~11.5), outperforming the common MediaPipe baseline (BLEU ~10). In occlusion cases, Sapiens is correct in all tested instances (15/15), while OpenPifPaf fails in nearly all (1/15) and also yields the weakest translation scores. Estimators that frequently leave out hand keypoints are associated with lower BLEU/BLEURT. We release code that can be used not only to reproduce our experiments, but also considerably lowers the barrier for other researchers to use alternative pose estimators.
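Since MediaPipe Holistic is the common baseline the paper compares against, here is a minimal sketch of extracting a pose sequence from a video with it. This is a generic use of MediaPipe's public Holistic API, not the authors' released pipeline, and the output layout (frames × keypoints × 3, with NaN for missed detections) is an assumption for illustration.

```python
import cv2
import mediapipe as mp
import numpy as np

def landmarks_to_array(landmark_list, n_points):
    """Convert a MediaPipe landmark list to an (n_points, 3) array; NaN if undetected."""
    if landmark_list is None:
        return np.full((n_points, 3), np.nan)
    return np.array([[lm.x, lm.y, lm.z] for lm in landmark_list.landmark])

def extract_pose_sequence(video_path):
    """Run MediaPipe Holistic frame by frame.

    Returns an array of shape (frames, 75, 3): 33 body + 21 left-hand
    + 21 right-hand keypoints, in normalized image coordinates.
    """
    cap = cv2.VideoCapture(video_path)
    frames = []
    with mp.solutions.holistic.Holistic(static_image_mode=False) as holistic:
        while True:
            ok, bgr = cap.read()
            if not ok:
                break
            results = holistic.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))
            frames.append(np.concatenate([
                landmarks_to_array(results.pose_landmarks, 33),
                landmarks_to_array(results.left_hand_landmarks, 21),
                landmarks_to_array(results.right_hand_landmarks, 21),
            ]))
    cap.release()
    return np.stack(frames)
```

A sequence extracted this way could then be fed to a pose-based SLT model in place of raw video, which is the setup the abstract describes; swapping in a different estimator only requires producing arrays in the same layout.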