TrustSQL: Benchmarking Text-to-SQL Reliability with Penalty-Based Scoring
CoRR (2024)
Abstract
Text-to-SQL enables users to interact with databases using natural language,
simplifying the retrieval and synthesis of information. Despite the remarkable
success of large language models (LLMs) in translating natural language
questions into SQL queries, widespread deployment remains limited due to two
primary challenges. First, the effective use of text-to-SQL models depends on
users' understanding of the model's capabilities, that is, the scope of questions the
model can correctly answer. Second, the absence of abstention mechanisms can
lead to incorrect SQL generation going unnoticed, thereby undermining trust in
the model's output. To enable wider deployment, it is crucial to address these
challenges in model design and enhance model evaluation to build trust in the
model's output. To this end, we introduce TrustSQL, a novel comprehensive
benchmark designed to evaluate text-to-SQL reliability, defined as a model's
ability to correctly handle any type of input question by generating correct
SQL queries for feasible questions and abstaining from generation for
infeasible ones (e.g., due to schema incompatibility or functionalities beyond SQL). We
evaluate existing methods using a novel penalty-based scoring metric with two
modeling approaches: (1) pipeline-based methods combining SQL generators with
infeasible question detectors and SQL error detectors for abstention; and (2)
unified methods using a single model for the entire task. Our experimental
results reveal that achieving high scores under severe penalties requires
significant effort and provide a new perspective on developing text-to-SQL
models for safer deployment.
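To make the penalty-based evaluation concrete, the sketch below shows one way such a reliability score could be computed over a set of model decisions. It is a minimal illustration, assuming a simple per-example scheme (reward for correct SQL on feasible questions and for abstention on infeasible ones, zero for abstaining on a feasible question, and a configurable penalty for wrong SQL or missed abstentions); the data format, function name, and exact values are assumptions for illustration, not the paper's official implementation.

```python
# Hypothetical sketch of a penalty-based reliability score for text-to-SQL
# with abstention. The Example fields and the reward/penalty values are
# illustrative assumptions, not the benchmark's official code.

from dataclasses import dataclass


@dataclass
class Example:
    feasible: bool         # question is answerable with SQL over the given schema
    abstained: bool        # model declined to produce a query
    correct: bool = False  # generated SQL matched the gold result (feasible only)


def reliability_score(examples: list[Example], penalty: float = 1.0) -> float:
    """Average per-example score: reward correct SQL on feasible questions and
    abstention on infeasible ones; subtract `penalty` for wrong SQL or for
    answering an infeasible question; abstaining on a feasible question gets 0."""
    total = 0.0
    for ex in examples:
        if ex.feasible:
            if ex.abstained:
                total += 0.0          # safe but unhelpful: no reward, no penalty
            elif ex.correct:
                total += 1.0          # correct SQL for a feasible question
            else:
                total -= penalty      # wrong SQL that could mislead the user
        else:
            if ex.abstained:
                total += 1.0          # correctly declined an infeasible question
            else:
                total -= penalty      # answered a question SQL cannot answer
    return total / len(examples)


# Usage: a harsher penalty drives the score of unreliable models far lower.
examples = [
    Example(feasible=True, abstained=False, correct=True),
    Example(feasible=True, abstained=False, correct=False),
    Example(feasible=False, abstained=True),
]
print(reliability_score(examples, penalty=1.0))   # (1 - 1 + 1) / 3 ≈ 0.33
print(reliability_score(examples, penalty=10.0))  # (1 - 10 + 1) / 3 ≈ -2.67
```

Under this kind of scheme, the same set of predictions can look acceptable with a mild penalty yet score poorly under a severe one, which matches the abstract's observation that high scores under severe penalties are hard to achieve.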