ArXiv TLDR

Backtranslation Augmented Direct Preference Optimization for Neural Machine Translation

arXiv:2604.25702

Mehrdad Ghassabi, Spehr Rajabi, Hamidreza Baradaran Kashani, Sadra Hakim, Mahshid Keivandarian

cs.CL

TLDR

Introduces a reinforcement learning framework using Direct Preference Optimization to improve neural machine translation quality.

Key contributions

  • Proposes RL-based post-training with Direct Preference Optimization (DPO) for NMT error correction.
  • Requires only a general text corpus and expert feedback, human or AI, for iterative improvement.
  • Demonstrates significant COMET score boost from 0.703 to 0.747 on English-to-German translation.
  • Validates DPO as an efficient, stable method to enhance pre-trained NMT models via preference feedback.
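At the core of the contributions above is the DPO objective, which trains the policy model on (chosen, rejected) translation pairs without a separate reward model. Below is a minimal sketch of the per-pair DPO loss, assuming per-sequence log-probabilities under the trained policy and a frozen reference model are already available; the function name and interface are illustrative, not from the paper.

```python
import math

def dpo_loss(pi_logp_w, pi_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    pi_logp_w / pi_logp_l   : log-prob of the chosen / rejected translation
                              under the policy being trained
    ref_logp_w / ref_logp_l : the same quantities under the frozen reference
    beta                    : strength of the implicit KL regularization
                              toward the reference model
    """
    # Log-ratio of policy to reference for each candidate translation
    chosen_ratio = pi_logp_w - ref_logp_w
    rejected_ratio = pi_logp_l - ref_logp_l
    # -log sigmoid(beta * margin): loss shrinks as the policy prefers the
    # chosen translation more strongly than the reference does
    margin = beta * (chosen_ratio - rejected_ratio)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy and reference agree exactly, the margin is zero and the loss is log 2; widening the margin in favor of the chosen translation drives the loss toward zero.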

Why it matters

This paper shows how reinforcement learning with preference feedback can effectively improve NMT models post-training. It offers a practical way to reduce translation errors without needing additional parallel data.
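The practical appeal is that preference data can be mined from a monolingual corpus: sample several candidate translations per source sentence, have an expert (human or AI) rank them, and keep the best and worst as a (chosen, rejected) pair. A minimal sketch of that loop, assuming hypothetical `translate` and `expert_rank` callables that stand in for the NMT model and the expert judge:

```python
def build_preference_pairs(source_sentences, translate, expert_rank, n_samples=4):
    """Mine DPO preference pairs from a monolingual source corpus.

    translate(src, n)            : returns n candidate translations from the
                                   NMT model (hypothetical interface)
    expert_rank(src, candidates) : returns the candidates ordered best-first,
                                   standing in for human or AI feedback
    """
    pairs = []
    for src in source_sentences:
        candidates = translate(src, n_samples)
        ranked = expert_rank(src, candidates)
        # Best vs. worst candidate forms one (chosen, rejected) pair
        pairs.append({"prompt": src, "chosen": ranked[0], "rejected": ranked[-1]})
    return pairs
```

No parallel reference translations are needed at any point; only relative judgments between model outputs enter the training data.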

Original Abstract

Contemporary neural machine translation (NMT) systems are almost exclusively built by training on supervised parallel data. Despite the tremendous progress achieved, these systems still exhibit persistent translation errors. This paper proposes that a post-training paradigm based on reinforcement learning (RL) can effectively rectify such mistakes. We introduce a novel framework that requires only a general text corpus and an expert translator, which can be either a human or an AI system, to provide iterative feedback. In our experiments, we focus specifically on English-to-German translation as a representative high-resource language pair. Crucially, we implement this RL-based post-training using Direct Preference Optimization (DPO). Applying our DPO-driven framework to the gemma3-1b model yields a significant improvement in translation quality, elevating its COMET score from 0.703 to 0.747 on the English-to-German task. The results demonstrate that DPO offers an efficient and stable pathway for enhancing pre-trained NMT models through preference-based post-training.
