ArXiv TLDR

Dependency-Guided Repository-Level C-to-Rust Translation with Reinforcement Alignment

arXiv: 2604.02852

Jia Feng, Wenjie Gan, Cuiyun Gao, Chaozheng Wang, Feng Luo + 3 more

cs.SE

TLDR

DepTrans is a new framework that automates repository-level C-to-Rust code migration using reinforcement learning and dependency-guided iterative refinement, substantially outperforming prior baselines in compilation success and computational accuracy.

Key contributions

  • Introduces DepTrans, a framework for automated repository-level C-to-Rust code translation.
  • Uses Reinforcement-Aligned Syntax Training for improved generation quality via multi-task fine-tuning and RL.
  • Employs Dependency-Guided Iterative Refinement to capture and refine code based on cross-file dependencies.
  • Achieves a 60.7% compilation success rate and 43.5% computational accuracy, outperforming the strongest baseline by 22.8 and 17.3 percentage points.
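Conceptually, the Dependency-Guided Iterative Refinement step works as a compile-and-repair loop: translate a C unit with its dependency context, check whether the result compiles, and feed errors back for refinement. The sketch below is an illustration only, not the paper's implementation; `translate`, `check_compile`, and `refine` are hypothetical stand-ins for the LLM calls and a `cargo check`-style build step.

```python
# Hedged sketch of a dependency-guided compile-and-repair loop.
# All three helpers are simplified stand-ins, not the paper's method.

def translate(c_unit: str, deps: list[str]) -> str:
    # A real system would prompt the LLM with the C code plus the
    # signatures of its cross-file dependencies.
    return f"// deps: {', '.join(deps)}\nfn {c_unit}() {{ todo!() }}"

def check_compile(rust_code: str) -> tuple[bool, str]:
    # Stand-in for invoking `cargo check`; here we only flag a placeholder.
    if "todo!()" in rust_code:
        return False, "error: unfinished body"
    return True, ""

def refine(rust_code: str, errors: str, deps: list[str]) -> str:
    # A real system would feed the compiler errors back to the LLM.
    return rust_code.replace("todo!()", "()")

def iterative_refine(c_unit: str, deps: list[str], max_rounds: int = 3) -> str:
    """Translate one C unit, then refine until it compiles or rounds run out."""
    rust_code = translate(c_unit, deps)
    for _ in range(max_rounds):
        ok, errors = check_compile(rust_code)
        if ok:
            break
        rust_code = refine(rust_code, errors, deps)
    return rust_code

print(iterative_refine("parse_header", ["utils", "io"]))
```

The key design point is that compiler feedback, rather than a single-shot translation, drives convergence toward code that builds at the repository level.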

Why it matters

This paper addresses the critical challenge of automating C-to-Rust migration, which is vital for enhancing software security and performance. DepTrans significantly advances this field by effectively handling complex dependencies and improving translation quality, demonstrating practical potential for industrial projects.

Original Abstract

Automating C-to-Rust migration is critical for improving software security without sacrificing performance. Traditional rule-based methods struggle with diverse C idioms, often producing rigid and unidiomatic Rust code. Large Language Models (LLMs), trained on massive code corpora, offer a promising alternative by leveraging cross-language generalization to generate more idiomatic and maintainable Rust code. However, several challenges remain. First, existing LLM-based approaches fail to handle cross-file dependencies effectively, either ignoring them or including entire files as context, which limits accurate dependency modeling. Second, complex dependencies and structured inputs and outputs make it difficult to verify syntactic correctness and functional equivalence at the repository level. Third, the lack of large-scale C-Rust parallel data constrains model performance. We propose DepTrans, a framework that combines model capability enhancement with structured inference. DepTrans introduces Reinforcement-Aligned Syntax Training to improve generation quality through multi-task fine-tuning and feedback-driven reinforcement learning. It further applies Dependency-Guided Iterative Refinement to capture fine-grained cross-file dependencies and iteratively refine generated Rust code. We construct a dataset of 85k training samples and a benchmark of 145 repository-level instances. Experiments show that DepTrans achieves a 60.7 percent compilation success rate and 43.5 percent computational accuracy, outperforming the strongest baseline by 22.8 and 17.3 percentage points. It also successfully builds 7 of 15 industrial C projects, demonstrating its practical potential.
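The abstract's first challenge, capturing cross-file dependencies without "including entire files as context", amounts to selecting only the external definitions a translation unit actually references. The sketch below is a simplified illustration of that idea under assumed interfaces; the `extract_dep_context` helper and its regex-based signature scanning are assumptions for demonstration, not the paper's dependency analysis.

```python
import re

def extract_dep_context(target_src: str, repo_files: dict[str, str]) -> list[str]:
    """Collect only the cross-file function signatures the target file calls,
    instead of concatenating whole files into the prompt context."""
    # Names that appear as calls in the target translation unit.
    called = set(re.findall(r"\b(\w+)\s*\(", target_src))
    context = []
    for path, src in repo_files.items():
        # Crude scan for top-level C function definitions: "ret name(args) {".
        for m in re.finditer(r"^\w[\w\s\*]*?\b(\w+)\s*\([^;{]*\)\s*\{", src, re.M):
            if m.group(1) in called:
                sig = src[m.start():m.end() - 1].strip()  # keep signature only
                context.append(f"// from {path}\n{sig};")
    return context

repo = {"math.c": "int add(int a, int b) {\n  return a + b;\n}\n"}
target = "int main(void) { return add(1, 2); }"
print(extract_dep_context(target, repo))
```

Passing only these signatures keeps the prompt small and dependency-focused, which is the motivation the abstract gives for fine-grained modeling over whole-file context.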
