Can Coding Agents Reproduce Findings in Computational Materials Science?
Ziyang Huang, Yi Cao, Ali K. Shargh, Jing Luo, Ruidong Mei, et al. (13 additional authors)
TLDR
AutoMat benchmarks LLM coding agents' ability to reproduce computational materials science findings, revealing that even the best-performing setting succeeds only 54.1% of the time.
Key contributions
- Introduces AutoMat, a benchmark for LLM agents in computational materials science reproducibility.
- AutoMat challenges agents to recover underspecified procedures, navigate specialized toolchains, and interpret results in the context of scientific claims (see the sketch after this list).
- Current LLM agents achieve low success rates (best setting: 54.1%) and struggle most when workflows must be reconstructed from paper text alone.
- Identifies key failure modes: incomplete procedures, methodological deviations, and execution fragility.
Why it matters
This paper highlights a critical gap in LLM agents' capabilities for scientific research, particularly in computational materials science. AutoMat serves as both a benchmark and a diagnostic tool, guiding the development of agents capable of executing complex, domain-specific scientific workflows.
Original Abstract
Large language models are increasingly deployed as autonomous coding agents and have achieved remarkably strong performance on software engineering benchmarks. However, it is unclear whether such success transfers to computational scientific workflows, where tasks require not only strong coding ability, but also the ability to navigate complex, domain-specific procedures and to interpret results in the context of scientific claims. To address this question, we present AutoMat, a benchmark for evaluating LLM-based agents' ability to reproduce claims from computational materials science. AutoMat poses three interrelated challenges: recovering underspecified computational procedures, navigating specialized toolchains, and determining whether the resulting evidence supports a claim. By working closely with subject matter experts, we curate a set of claims from real materials science papers to test whether coding agents can recover and execute the end-to-end workflow needed to support (or undermine) such claims. We then evaluate multiple representative coding agent settings across several foundation models. Our results show that current LLM-based agents obtain low overall success rates on AutoMat, with the best-performing setting achieving a success rate of only 54.1%. Error analysis further reveals that agents perform worst when workflows must be reconstructed from paper text alone and that they fail primarily due to incomplete procedures, methodological deviations, and execution fragility. Taken together, these findings position AutoMat as both a benchmark for computational scientific reproducibility and a tool for diagnosing the current limitations of agentic systems in AI-for-science settings.