Round-Trip Translation Reveals What Frontier Multilingual Benchmarks Miss
Ronald Skorobogat, Ameya Prabhu, Matthias Bethge
TLDR
This paper introduces round-trip translation as a superior method to evaluate true multilingual proficiency in frontier models, revealing flaws in current benchmarks.
Key contributions
- Existing multilingual benchmarks measure reasoning and factual recall, not true language proficiency.
- Proposes round-trip translation to evaluate multilingual capability by detecting semantic gaps.
- The method correlates highly (ρ = 0.94) with real-world user ratings on LMArena, without requiring human reference translations.
- Introduces "Lost in Translation (LiT)", a new benchmark for realistic multilingual model evaluation.
Why it matters
Current multilingual evaluations are flawed, leading to models optimized for the wrong metrics. This paper offers a simple, effective, and human-reference-free method to truly assess multilingual proficiency. It provides a new benchmark, LiT, to guide future model development towards genuine multilingual understanding.
Original Abstract
Multilingual benchmarks guide the development of frontier models. Yet the multilingual evaluations reported for frontier models are structured similarly to popular reasoning and knowledge benchmarks, merely repeated across many languages. We show that such benchmarks, and consequently multilingual evaluations, measure mathematical reasoning and factual recall, not multilingual proficiency. For example, thinking variants dramatically outperform instruct variants on these benchmarks, yet often perform worse on real-world multilingual tasks, such as LMArena. We propose a simple alternative: evaluate multilingual capability via round-trip translation. Given text in a source language, translate it to a target language and back; semantic gaps between the original and the result expose failures in multilingual generation capabilities. On our benchmark, round-trip translation correlates almost perfectly (ρ = 0.94) with user ratings on LMArena, requires no human reference translations, and does not require a multilingual judge more capable than the tested models. Lastly, we introduce Lost in Translation (LiT), a challenging round-trip translation benchmark spanning widely spoken languages worldwide, for realistic evaluation of multilingual frontier models.
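The round-trip procedure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `translate` stands in for any model call (an assumption), and the semantic gap is approximated here with simple token-overlap (Jaccard) similarity, whereas a real evaluation would use a stronger semantic scoring method.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Crude stand-in for a semantic similarity measure:
    token overlap between the two texts, in [0, 1]."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)


def round_trip_score(text: str, translate, src: str = "en", tgt: str = "de") -> float:
    """Translate text src -> tgt -> src and score how much meaning survives.
    `translate(text, src, tgt)` is a placeholder for a model translation call."""
    forward = translate(text, src, tgt)   # source -> target language
    back = translate(forward, tgt, src)   # target -> back to source
    return jaccard_similarity(text, back)  # 1.0 = nothing lost (by this proxy)
```

A perfect translator scores 1.0; a model that drops or distorts content scores lower, which is the signal the benchmark uses without needing any human reference translation.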