ArXiv TLDR

IndustryCode: A Benchmark for Industry Code Generation

arXiv:2604.02729

Puyu Zeng, Zhaoxi Wang, Zhixu Duan, Liang Feng, Shaobo Wang + 5 more

cs.SE · cs.AI · cs.CL

TLDR

IndustryCode is a new benchmark for evaluating LLMs' code generation across diverse industrial domains and programming languages.

Key contributions

  • Introduces IndustryCode, the first multi-domain, multi-language benchmark for industrial code generation.
  • Comprises 579 sub-problems from 125 industrial challenges across finance, aerospace, automation, and remote sensing.
  • Evaluates LLMs on diverse languages including MATLAB, Python, C++, and Stata.
  • Reveals that even the top-performing model, Claude 4.5 Opus, achieves only 68.1% accuracy on sub-problems (42.5% on main problems).

Why it matters

This paper addresses a critical gap in LLM evaluation by providing a comprehensive benchmark for industrial code generation. IndustryCode enables more realistic assessment of LLM generalization capabilities across complex, real-world scenarios. This will drive advancements in LLMs for industrial intelligence.

Original Abstract

Code generation and comprehension by Large Language Models (LLMs) have emerged as core drivers of industrial intelligence and decision optimization, finding widespread application in fields such as finance, automation, and aerospace. Although recent advancements have demonstrated the remarkable potential of LLMs in general code generation, existing benchmarks are mainly confined to single domains and languages. Consequently, they fail to effectively evaluate the generalization capabilities required for real-world industrial applications or to reflect the coding proficiency demanded by complex industrial scenarios. To bridge this gap, we introduce IndustryCode, the first comprehensive benchmark designed to span multiple industrial domains and programming languages. IndustryCode comprises 579 sub-problems derived from 125 primary industrial challenges, accompanied by rigorous problem descriptions and test cases. It covers a wide range of fields, including finance, automation, aerospace, and remote sensing, and incorporates diverse programming languages such as MATLAB, Python, C++, and Stata. In our evaluation, the top-performing model, Claude 4.5 Opus, achieved an overall accuracy of 68.1% on sub-problems and 42.5% on main problems. The benchmark dataset and automated evaluation code will be made publicly available upon acceptance.
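The abstract reports two accuracy levels: per sub-problem, and per main problem (where a main problem groups several sub-problems). The paper's evaluation code is not yet released, so as an illustration only, here is a minimal sketch of how such two-level scoring could work; the function name, data shapes, and the rule that a main problem counts as solved only when all of its sub-problems pass are assumptions, not the authors' published method.

```python
# Hypothetical scoring sketch for a two-level benchmark like IndustryCode.
# results maps each main problem to the pass/fail outcome of its sub-problems.

def score(results: dict[str, list[bool]]) -> tuple[float, float]:
    """Return (sub-problem accuracy, main-problem accuracy).

    Assumption: a main problem is solved only if all its sub-problems pass.
    """
    sub_total = sum(len(subs) for subs in results.values())
    sub_passed = sum(sum(subs) for subs in results.values())
    main_passed = sum(all(subs) for subs in results.values())
    return sub_passed / sub_total, main_passed / len(results)

# Toy example with invented problem names (not from the benchmark):
demo = {
    "finance/option_pricing": [True, True, False],
    "aerospace/orbit_sim": [True, True],
}
sub_acc, main_acc = score(demo)
print(f"sub-problem accuracy: {sub_acc:.1%}, main-problem accuracy: {main_acc:.1%}")
```

Under this all-or-nothing aggregation, main-problem accuracy is necessarily at or below sub-problem accuracy, which matches the reported gap between 68.1% and 42.5%.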
