FINER-SQL: Boosting Small Language Models for Text-to-SQL
Thanh Dat Hoang, Thanh Trung Huynh, Matthias Weidlich, Thanh Tam Nguyen, Tong Chen + 2 more
TLDR
FINER-SQL boosts small language models for Text-to-SQL via fine-grained RL feedback, achieving LLM-level accuracy with lower latency.
Key contributions
- Proposes FINER-SQL, an RL framework for SLMs using fine-grained execution feedback.
- Replaces sparse 0/1 rewards with dense, interpretable rewards that provide a learning signal even for incorrect SQL.
- Introduces a memory reward for semantic stability and an atomic reward for partial structural credit (a sketch of the atomic reward follows this list).
- Achieves LLM-level accuracy (up to 85%) with a 3B model, reducing inference latency to 5.57 s/sample.
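As a minimal sketch of what an operation-level "atomic reward" could look like, the snippet below scores a generated query by the Jaccard overlap of its SQL operations against the gold query. The keyword-based extraction and the overlap metric are assumptions for illustration; the paper's actual operation definition and scoring may differ.

```python
import re

# Hypothetical atomic reward: operation-level overlap between a generated
# SQL query and the gold query. This is a sketch; FINER-SQL's actual
# operation extraction and scoring may differ.
SQL_OPS = {
    "select", "from", "where", "group", "having", "order",
    "limit", "join", "union", "intersect", "except", "distinct",
}

def extract_operations(sql: str) -> set[str]:
    """Extract the set of SQL operation keywords appearing in a query."""
    tokens = re.findall(r"[a-zA-Z_]+", sql.lower())
    return {t for t in tokens if t in SQL_OPS}

def atomic_reward(pred_sql: str, gold_sql: str) -> float:
    """Jaccard overlap of operation sets: grants partial credit for
    structurally correct but incomplete SQL, instead of a flat 0."""
    pred_ops = extract_operations(pred_sql)
    gold_ops = extract_operations(gold_sql)
    if not pred_ops and not gold_ops:
        return 1.0
    return len(pred_ops & gold_ops) / len(pred_ops | gold_ops)

# Example: the prediction misses the GROUP BY but matches the other
# clauses, so it still earns partial credit (0.75) rather than zero.
print(atomic_reward(
    "SELECT name FROM users WHERE age > 30",
    "SELECT name FROM users WHERE age > 30 GROUP BY name",
))
```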
Why it matters
This paper addresses the cost, latency, and privacy limitations of large language models for Text-to-SQL, offering a cost-efficient and privacy-preserving alternative. By enabling small language models to achieve high accuracy, FINER-SQL makes advanced Text-to-SQL practical for real-world, on-premise deployments.
Original Abstract
Large language models have driven major advances in Text-to-SQL generation. However, they suffer from high computational cost, long latency, and data privacy concerns, which make them impractical for many real-world applications. A natural alternative is to use small language models (SLMs), which enable efficient and private on-premise deployment. Yet, SLMs often struggle with weak reasoning and poor instruction following. Conventional reinforcement learning methods based on sparse binary rewards (0/1) provide little learning signal when the generated SQLs are incorrect, leading to unstable or collapsed training. To overcome these issues, we propose FINER-SQL, a scalable and reusable reinforcement learning framework that enhances SLMs through fine-grained execution feedback. Built on group relative policy optimization, FINER-SQL replaces sparse supervision with dense and interpretable rewards that offer continuous feedback even for incorrect SQLs. It introduces two key reward functions: a memory reward, which aligns reasoning with verified traces for semantic stability, and an atomic reward, which measures operation-level overlap to grant partial credit for structurally correct but incomplete SQLs. This approach transforms discrete correctness into continuous learning, enabling stable, critic-free optimization. Experiments on the BIRD and Spider benchmarks show that FINER-SQL achieves up to 67.73% and 85% execution accuracy with a 3B model, matching much larger LLMs while reducing inference latency to 5.57 s/sample. These results highlight a cost-efficient and privacy-preserving path toward high-performance Text-to-SQL generation. Our code is available at https://github.com/thanhdath/finer-sql.
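To make the sparse-vs-dense reward argument concrete, here is a minimal sketch of the group-relative advantage used in GRPO-style, critic-free training. The reward values below are illustrative, not from the paper; FINER-SQL's actual reward combines execution feedback with the memory and atomic rewards described above.

```python
import numpy as np

def group_relative_advantages(rewards: np.ndarray) -> np.ndarray:
    """Group-relative advantage as in GRPO-style training:
    A_i = (r_i - mean(r)) / (std(r) + eps), computed over a group of
    completions sampled for the same prompt. Dense rewards keep the
    group standard deviation nonzero even when every sampled SQL fails
    to execute, so the policy still receives a usable gradient signal."""
    eps = 1e-6
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# With sparse 0/1 rewards, an all-incorrect group yields zero advantage
# for every sample, i.e. no learning signal:
print(group_relative_advantages(np.array([0.0, 0.0, 0.0, 0.0])))
# Dense partial-credit rewards differentiate the same group:
print(group_relative_advantages(np.array([0.2, 0.45, 0.3, 0.6])))
```

This illustrates why replacing binary correctness with continuous rewards stabilizes critic-free optimization: the normalization step only produces informative advantages when rewards vary within the group.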