ArXiv TLDR

Boosting Automatic Java-to-Cangjie Translation with Multi-Stage LLM Training and Error Repair

arXiv: 2605.07403

Xinyue Liang, Jingxuan Zhang, Lin Li, Jun Zhang, Junhao Chen

cs.SE

TLDR

A multi-stage LLM training framework with iterative error repair significantly improves Java-to-Cangjie code translation, boosting functional equivalence.

Key contributions

  • Proposes a multi-stage LLM training framework for Java-to-Cangjie code translation.
  • Integrates knowledge, semantic alignment, and structure awareness through iterative training.
  • Employs compiler feedback and error repair case retrieval to fix incorrect Cangjie code.
  • Achieves 6.06% higher functional equivalence than state-of-the-art with limited parallel data.
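The repair step described above pairs compiler diagnostics with retrieval over past error-fix cases. The paper's tooling is not public, so the following is only a minimal conceptual sketch of that loop; all names (`compile_cangjie`, `retrieve_repair_cases`, `repair_with_llm`) and the toy error rule are hypothetical stand-ins.

```python
# Conceptual sketch of the iterative compile-and-repair loop.
# All function names and the toy error rule are hypothetical; the paper's
# actual compiler integration and LLM repair prompts are not public.

def compile_cangjie(code: str):
    """Stand-in for the Cangjie compiler: returns (ok, error_message)."""
    if "var " in code:  # toy rule: pretend 'var' is the only possible error
        return False, "error: unexpected keyword 'var'"
    return True, ""

def retrieve_repair_cases(error_msg: str, repo: list):
    """Fetch past (error, fix) cases whose error text overlaps the new one."""
    return [case for case in repo if case["error"] in error_msg]

def repair_with_llm(code: str, cases: list):
    """Stand-in for an LLM repair call guided by the retrieved cases."""
    for case in cases:
        code = code.replace(case["bad"], case["good"])
    return code

def translate_with_repair(cangjie_code: str, repo: list, max_rounds: int = 3):
    """Iterate: compile; on failure, retrieve similar cases and repair."""
    for _ in range(max_rounds):
        ok, err = compile_cangjie(cangjie_code)
        if ok:
            return cangjie_code
        cases = retrieve_repair_cases(err, repo)
        cangjie_code = repair_with_llm(cangjie_code, cases)
    return cangjie_code

# Tiny toy repair repository: one past case mapping 'var' to 'let'.
repo = [{"error": "unexpected keyword 'var'", "bad": "var ", "good": "let "}]
print(translate_with_repair("var x = 1", repo))  # -> let x = 1
```

The loop terminates either when the code compiles or after a fixed number of repair rounds, mirroring the compiler-feedback-driven iteration the bullets describe.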

Why it matters

This paper addresses the challenge of translating popular programming languages to low-resource ones like Cangjie, for which parallel training data is scarce. By improving automated translation, it accelerates development in emerging language ecosystems, and its gains in translation quality make the approach practical for real-world use.

Original Abstract

With the rapid evolution of emerging programming language ecosystems, the demand for code translation to low-resource languages continues to grow. As Cangjie emerges as a new programming language, its ecosystem and development toolchains are rapidly expanding. Automated translation from popular programming languages to Cangjie is therefore valuable for practical development. However, constrained by both insufficient Cangjie knowledge and scarce parallel code corpora, general Large Language Models (LLMs) are prone to syntactic errors and semantic as well as structural misalignment in code translation. Existing approaches typically rely on fine-tuning with large-scale parallel data, but they cannot reliably improve compilability or semantic consistency for the low-resource Cangjie language. To tackle these challenges, we propose a multi-stage training framework of LLMs that employs the iterative error repair technique to translate Java code into Cangjie code. This training framework performs training on LLMs, gradually integrating knowledge and achieving semantic alignment as well as structure awareness. During the code translation, we also combine the compiler feedback and error repair case retrieval to repair the incorrect Cangjie code. We construct syntactic knowledge and monolingual instruction datasets to train the LLM. In addition, we also build a Cangjie error repair repository to support error repair in our approach. Experimental results show that, with limited parallel data, our approach improves functional equivalence by 6.06% compared to the state-of-the-art approaches. Meanwhile, ablation studies confirm that each training stage positively contributes to the final performance.
