ArXiv TLDR

IndiaFinBench: An Evaluation Benchmark for Large Language Model Performance on Indian Financial Regulatory Text

arXiv: 2604.19298

Rajveer Singh Pall

cs.CL cs.AI cs.IR

TLDR

IndiaFinBench is the first public benchmark for evaluating LLMs on Indian financial regulatory text, featuring 406 expert-annotated Q&A pairs.

Key contributions

  • Introduces IndiaFinBench, the first public benchmark for LLMs on Indian financial regulatory text.
  • Features 406 expert-annotated Q&A pairs from SEBI/RBI documents across four financial task types.
  • Evaluates 12 LLMs under zero-shot conditions (70.4%-89.7% accuracy), all outperforming a 60.0% non-specialist human baseline.
  • Identifies numerical reasoning as the most discriminative task (a 35.9-point spread) and, via bootstrap significance testing, three statistically distinct performance tiers (sketched below).
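
The performance tiers come from paired bootstrap resampling over per-item correctness. Below is a minimal sketch of that kind of comparison, assuming 0/1 scores per benchmark item for two models; the score vectors are synthetic stand-ins, not the paper's actual outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_accuracy_diff(scores_a, scores_b, n_resamples=10_000):
    """Paired bootstrap over benchmark items: resample item indices
    with replacement and collect the accuracy gap between two models."""
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(scores_a)
    idx = rng.integers(0, n, size=(n_resamples, n))
    diffs = scores_a[idx].mean(axis=1) - scores_b[idx].mean(axis=1)
    # 95% percentile interval; two models sit in statistically distinct
    # tiers when the interval excludes zero.
    return np.percentile(diffs, [2.5, 97.5])

# Synthetic per-item correctness for two hypothetical models over the
# benchmark's 406 items (accuracies chosen to mirror the reported range).
a = (rng.random(406) < 0.897).astype(int)
b = (rng.random(406) < 0.704).astype(int)
lo, hi = bootstrap_accuracy_diff(a, b)
print(f"95% CI for accuracy difference: [{lo:.3f}, {hi:.3f}]")
```

A paired bootstrap assumes both models are scored on the same items, which holds here since all twelve models answer the same 406 questions.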

Why it matters

This paper addresses a critical gap in financial NLP: existing benchmarks draw exclusively from Western corpora, and IndiaFinBench is the first to cover a non-Western regulatory framework, India's. It enables rigorous evaluation of LLMs on complex, real-world regulatory text, which matters for building reliable financial AI, and its results point to numerical reasoning as the clearest area for model improvement.

Original Abstract

We introduce IndiaFinBench, to our knowledge the first publicly available evaluation benchmark for assessing large language model (LLM) performance on Indian financial regulatory text. Existing financial NLP benchmarks draw exclusively from Western financial corpora (SEC filings, US earnings reports, and English-language financial news), leaving a significant gap in coverage of non-Western regulatory frameworks. IndiaFinBench addresses this gap with 406 expert-annotated question-answer pairs drawn from 192 documents sourced from the Securities and Exchange Board of India (SEBI) and the Reserve Bank of India (RBI), spanning four task types: regulatory interpretation (174 items), numerical reasoning (92 items), contradiction detection (62 items), and temporal reasoning (78 items). Annotation quality is validated through a model-based secondary pass (kappa=0.918 on contradiction detection) and a 60-item human inter-annotator agreement evaluation (kappa=0.611; 76.7% overall agreement). We evaluate twelve models under zero-shot conditions, with accuracy ranging from 70.4% (Gemma 4 E4B) to 89.7% (Gemini 2.5 Flash). All models substantially outperform a non-specialist human baseline of 60.0%. Numerical reasoning is the most discriminative task, with a 35.9 percentage-point spread across models. Bootstrap significance testing (10,000 resamples) reveals three statistically distinct performance tiers. The dataset, evaluation code, and all model outputs are available at https://github.com/rajveerpall/IndiaFinBench
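
The agreement figures in the abstract are Cohen's kappa, which discounts raw agreement by the agreement expected from each annotator's label frequencies alone. A small self-contained sketch (the label lists are invented for illustration):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the marginal label rates."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented example: two annotators on six contradiction-detection items.
a = ["yes", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "yes", "no", "no", "yes"]
print(cohens_kappa(a, b))  # p_o = 5/6, p_e = 0.5 -> kappa ~= 0.667
```

This is why the paper can report kappa=0.611 alongside 76.7% raw agreement: chance-corrected agreement is always at or below the raw rate.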
