ArXiv TLDR

Time Series Augmented Generation for Financial Applications

arXiv: 2604.19633

Anton Kolonin, Alexey Glushchenko, Evgeny Bochkov, Abhishek Saxena

cs.AI, cs.CE

TLDR

This paper introduces a new benchmark and framework (TSAG) for evaluating LLM agent reasoning in financial time-series analysis, finding that capable agents reach near-perfect tool-use accuracy with minimal hallucination.

Key contributions

  • Introduces a novel benchmark and methodology for evaluating LLM reasoning in financial time-series analysis.
  • Presents Time Series Augmented Generation (TSAG), a framework in which an LLM agent delegates quantitative tasks to verifiable external tools (see the sketch after this list).
  • Evaluates SOTA LLMs (e.g., GPT-4o, Llama 3) on a 100-question financial benchmark.
  • Demonstrates near-perfect tool-use accuracy and minimal hallucination in capable LLM agents.

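The digest includes no code from the paper, so the following is a minimal, hypothetical sketch of the tool-delegation pattern TSAG describes: the LLM emits a structured tool call, and the quantitative work runs in a verifiable external function. The tool names, JSON call format, and stubbed model output are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a TSAG-style tool-delegating agent (illustrative only;
# tool names, the call schema, and the stubbed LLM output are assumptions).
import json
import statistics
from typing import Callable, Dict, List

# External, verifiable tools that perform the quantitative work.
def moving_average(prices: List[float], window: int) -> List[float]:
    """Simple moving average over a price series."""
    return [statistics.mean(prices[i - window + 1:i + 1])
            for i in range(window - 1, len(prices))]

def volatility(prices: List[float]) -> float:
    """Sample standard deviation of period-over-period returns."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return statistics.stdev(returns)

TOOLS: Dict[str, Callable] = {
    "moving_average": moving_average,
    "volatility": volatility,
}

def run_tool_call(model_output: str) -> object:
    """Parse the agent's tool call (a JSON string the LLM would emit)
    and dispatch it to the matching external tool."""
    call = json.loads(model_output)
    tool = TOOLS[call["tool"]]           # tool selection is directly checkable
    return tool(**call["arguments"])     # the computation happens outside the LLM

if __name__ == "__main__":
    prices = [101.2, 102.5, 101.9, 103.4, 104.0, 103.1]
    # Stand-in for the LLM's response to: "What is the 3-day moving average?"
    llm_output = json.dumps(
        {"tool": "moving_average", "arguments": {"prices": prices, "window": 3}}
    )
    print(run_tool_call(llm_output))
```

Because the tool call is structured, both the selected tool and its arguments can be scored against a gold answer key, which is what makes tool-use accuracy measurable in this setup.
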
Why it matters

Evaluating LLM reasoning for complex financial tasks is a major open challenge. This paper provides a rigorous evaluation framework and benchmark that validate the tool-augmented paradigm, and both are released publicly to support standardized research on reliable financial AI.

Original Abstract

Evaluating the reasoning capabilities of Large Language Models (LLMs) for complex, quantitative financial tasks is a critical and unsolved challenge. Standard benchmarks often fail to isolate an agent's core ability to parse queries and orchestrate computations. To address this, we introduce a novel evaluation methodology and benchmark designed to rigorously measure an LLM agent's reasoning for financial time-series analysis. We apply this methodology in a large-scale empirical study using our framework, Time Series Augmented Generation (TSAG), where an LLM agent delegates quantitative tasks to verifiable, external tools. Our benchmark, consisting of 100 financial questions, is used to compare multiple SOTA agents (e.g., GPT-4o, Llama 3, Qwen2) on metrics assessing tool selection accuracy, faithfulness, and hallucination. The results demonstrate that capable agents can achieve near-perfect tool-use accuracy with minimal hallucination, validating the tool-augmented paradigm. Our primary contribution is this evaluation framework and the corresponding empirical insights into agent performance, which we release publicly to foster standardized research on reliable financial AI.

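The abstract's evaluation metrics can be operationalized simply; the sketch below is a hedged illustration of how tool-selection accuracy might be scored against a gold answer key. The record fields and scoring rule are assumptions, not the paper's exact protocol.

```python
# Hypothetical scoring of tool-selection accuracy over benchmark records;
# the record fields below are assumptions, not the paper's exact schema.
def tool_selection_accuracy(records):
    """Fraction of questions where the agent invoked the expected tool."""
    correct = sum(1 for r in records if r["called_tool"] == r["expected_tool"])
    return correct / len(records)

if __name__ == "__main__":
    results = [
        {"expected_tool": "volatility", "called_tool": "volatility"},
        {"expected_tool": "moving_average", "called_tool": "moving_average"},
        {"expected_tool": "max_drawdown", "called_tool": "volatility"},
    ]
    print(f"tool-selection accuracy: {tool_selection_accuracy(results):.2f}")
```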