ROSE: An Intent-Centered Evaluation Metric for NL2SQL
Wenqi Pei, Shizheng Hou, Boyan Li, Han Chen, Zhichao Shi, et al.
TLDR
ROSE is a new intent-centered metric for NL2SQL that reliably evaluates whether a predicted SQL query answers the user's question, outperforming existing metrics.
Key contributions
- Introduces ROSE, an intent-centered metric for evaluating NL2SQL solutions.
- Employs a Prover-Refuter cascade to assess semantic correctness against user intent.
- Achieves 24% higher agreement with human experts than other metrics on ROSE-VEC.
- Re-evaluates 19 NL2SQL methods, providing new insights into their performance.
Why it matters
The standard NL2SQL metric, Execution Accuracy (EX), is unreliable: it is sensitive to syntactic variation and easily misled by erroneous ground-truth SQL. ROSE offers a more robust assessment by focusing on whether the predicted SQL satisfies the user's intent, enabling more reliable research and development in the NL2SQL field.
Original Abstract
Execution Accuracy (EX), the widely used metric for evaluating the effectiveness of Natural Language to SQL (NL2SQL) solutions, is becoming increasingly unreliable. It is sensitive to syntactic variation, ignores that questions may admit multiple interpretations, and is easily misled by erroneous ground-truth SQL. To address this, we introduce ROSE, an intent-centered metric that focuses on whether the predicted SQL answers the question, rather than consistency with the ground-truth SQL under the reference-dependent paradigm. ROSE employs an adversarial Prover-Refuter cascade: SQL Prover assesses the semantic correctness of a predicted SQL against the user's intent independently, while Adversarial Refuter uses the ground-truth SQL as evidence to challenge and refine this judgment. On our expert-aligned validation set ROSE-VEC, ROSE achieves the best agreement with human experts, outperforming the next-best metric by nearly 24% in Cohen's Kappa. We also conduct a large-scale re-evaluation of 19 NL2SQL methods, revealing four valuable insights. We release ROSE and ROSE-VEC to facilitate more reliable NL2SQL research.
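The abstract's Prover-Refuter cascade can be sketched as a simple two-stage control flow. This is an illustrative sketch only: `rose_judge`, `prover`, and `refuter` are hypothetical names, and the toy judges below stand in for the LLM-based components the paper actually uses.

```python
def rose_judge(question, predicted_sql, gt_sql, prover, refuter):
    """Two-stage intent-centered verdict (illustrative sketch).

    Stage 1 (SQL Prover): judge whether predicted_sql answers the
    question, WITHOUT access to the ground-truth SQL (reference-free).
    Stage 2 (Adversarial Refuter): use the ground-truth SQL as evidence
    to challenge and possibly revise the Prover's initial verdict.
    """
    initial_verdict = prover(question, predicted_sql)
    final_verdict = refuter(question, predicted_sql, gt_sql, initial_verdict)
    return final_verdict


# Toy stand-ins for the real (LLM-based) Prover and Refuter judges:
toy_prover = lambda q, sql: sql.strip().upper().startswith("SELECT")
toy_refuter = lambda q, sql, gt, v: v and (
    ("COUNT" in sql.upper()) == ("COUNT" in gt.upper())
)

# A counting question where prediction and ground truth agree in intent:
print(rose_judge(
    "How many users are there?",
    "SELECT COUNT(*) FROM users",
    "SELECT COUNT(id) FROM users",
    toy_prover, toy_refuter,
))  # True: Prover accepts, Refuter finds no counter-evidence

# A prediction that returns names instead of a count is overturned:
print(rose_judge(
    "How many users are there?",
    "SELECT name FROM users",
    "SELECT COUNT(id) FROM users",
    toy_prover, toy_refuter,
))  # False: Refuter uses the ground truth to challenge the verdict
```

The design point illustrated here is the cascade itself: the Prover never sees the reference SQL, so its judgment is independent of ground-truth errors, while the Refuter re-introduces the reference only as adversarial evidence rather than as the sole source of truth.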