QuantCode-Bench: A Benchmark for Evaluating the Ability of Large Language Models to Generate Executable Algorithmic Trading Strategies
Alexey Khoroshilov, Alexey Chernysh, Orkhan Ekhtibarov, Nini Kamkia, Dmitry Zmitrovich
TLDR
QuantCode-Bench evaluates LLMs' ability to generate executable algorithmic trading strategies, revealing that failures stem from financial logic and API usage rather than syntax.
Key contributions
- Introduces QuantCode-Bench, a benchmark for evaluating LLM-generated algorithmic trading strategies targeting the Backtrader framework (a minimal example strategy follows this list).
- Comprises 400 tasks of varying difficulty drawn from Reddit, TradingView, StackExchange, GitHub, and synthetic sources.
- Employs a multi-stage pipeline that checks syntactic correctness, successful backtest execution, the presence of trades, and semantic alignment with the task description.
- Shows that current LLMs fail mainly at operationalizing financial logic, using the Backtrader API correctly, and adhering to task semantics, not at producing valid syntax.
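To make concrete what each task asks for, here is a minimal example of the kind of artifact a model must produce: a self-contained Backtrader strategy whose rules actually trigger trades on historical data. The SMA-crossover logic below is an illustrative choice, not a task taken from the benchmark.

```python
import backtrader as bt

class SmaCross(bt.Strategy):
    """Illustrative strategy: go long when the fast SMA crosses above
    the slow SMA, exit when it crosses back below. Not a benchmark task."""
    params = (("fast", 10), ("slow", 30))

    def __init__(self):
        fast = bt.ind.SMA(period=self.p.fast)
        slow = bt.ind.SMA(period=self.p.slow)
        # CrossOver is +1 on an upward cross, -1 on a downward cross
        self.crossover = bt.ind.CrossOver(fast, slow)

    def next(self):
        if not self.position and self.crossover > 0:
            self.buy()    # fast SMA crossed above slow: enter long
        elif self.position and self.crossover < 0:
            self.close()  # fast SMA crossed below slow: exit
```

Even this simple pattern exercises the skills the benchmark probes: correct indicator construction, position management, and rules that produce observable trades rather than merely running without error.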
Why it matters
This paper highlights the distinct challenges of applying LLMs to domain-specific code generation such as algorithmic trading: success requires a working grasp of financial logic and of a specialized API, not just general programming skill. The benchmark provides a concrete tool for measuring and advancing LLM capabilities on complex, real-world tasks.
Original Abstract
Large language models have demonstrated strong performance on general-purpose programming tasks, yet their ability to generate executable algorithmic trading strategies remains underexplored. Unlike standard code benchmarks, trading-strategy generation requires simultaneous mastery of domain-specific financial logic, knowledge of a specialized API, and the ability to produce code that is not only syntactically correct but also leads to actual trades on historical data. In this work, we present QuantCode-Bench, a benchmark for the systematic evaluation of modern LLMs in generating strategies for the Backtrader framework from textual descriptions in English. The benchmark contains 400 tasks of varying difficulty collected from Reddit, TradingView, StackExchange, GitHub, and synthetic sources. Evaluation is conducted through a multi-stage pipeline that checks syntactic correctness, successful backtest execution, the presence of trades, and semantic alignment with the task description using an LLM judge. We compare state-of-the-art models in two settings: single-turn, where the strategy must be generated correctly on the first attempt, and agentic multi-turn, where the model receives iterative feedback and may repair its errors. We analyze the failure modes across different stages of the pipeline and show that the main limitations of current models are not related to syntax, but rather to the correct operationalization of trading logic, proper API usage, and adherence to task semantics. These findings suggest that trading strategy generation constitutes a distinct class of domain-specific code generation tasks in which success requires not only technical correctness, but also alignment between natural-language descriptions, financial logic, and the observable behavior of the strategy on data.
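The abstract outlines a four-stage pipeline: syntax, execution, trade presence, and an LLM judge for semantics. Below is a minimal sketch of how such staged checking could be wired up with Backtrader; the harness functions, the exec-based strategy loading, the `judge` callable, and the failure-reporting format are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a four-stage evaluation pipeline (assumed structure, not the
# paper's code): syntax -> execution -> trades -> semantic judgment.
import ast
import backtrader as bt

def check_syntax(code: str) -> bool:
    """Stage 1: the generated strategy must parse as valid Python."""
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def run_backtest(code: str, datafeed):
    """Stages 2-3: run the strategy on historical data and collect trade
    statistics. Returns the TradeAnalyzer result, or None on failure."""
    try:
        namespace = {"bt": bt}
        exec(code, namespace)  # a real harness would sandbox this step
        strategy_cls = next(  # assume the code defines one Strategy subclass
            v for v in namespace.values()
            if isinstance(v, type) and issubclass(v, bt.Strategy)
            and v is not bt.Strategy
        )
        cerebro = bt.Cerebro()
        cerebro.adddata(datafeed)
        cerebro.addstrategy(strategy_cls)
        cerebro.addanalyzer(bt.analyzers.TradeAnalyzer, _name="trades")
        strat = cerebro.run()[0]
        return strat.analyzers.trades.get_analysis()
    except Exception:
        return None

def evaluate(code: str, task: str, datafeed, judge) -> str:
    """Run the stages in order and report the first one that fails,
    mirroring the per-stage failure attribution described above."""
    if not check_syntax(code):
        return "failed: syntax"
    analysis = run_backtest(code, datafeed)
    if analysis is None:
        return "failed: execution"
    if analysis.get("total", {}).get("total", 0) == 0:
        return "failed: no trades"
    if not judge(task, code):  # LLM-as-judge for semantic alignment
        return "failed: semantics"
    return "passed"
```

The early-exit ordering matters: a strategy that runs but never trades is a different failure mode from one that trades but ignores the task description, and the paper's analysis hinges on separating these stages.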