Abstract
A practical text-to-SQL system should generalize well across a wide variety of natural language questions, unseen database schemas, and novel SQL query structures. To comprehensively evaluate text-to-SQL systems, we introduce a \textbf{UNI}fied benchmark for \textbf{T}ext-to-SQL \textbf{E}valuation (UNITE). It is composed of publicly available text-to-SQL datasets, containing natural language questions from more than 12 domains, SQL queries drawn from more than 3.9K patterns, and 29K databases. Compared to the widely used Spider benchmark \cite{yu-etal-2018-spider}, we introduce $\sim$120K additional examples and a threefold increase in SQL patterns, such as comparative and boolean questions. We conduct a systematic study of six state-of-the-art (SOTA) text-to-SQL parsers on our new benchmark and show that: 1) Codex performs surprisingly well on out-of-domain datasets; 2) specially designed decoding methods (e.g., constrained beam search) improve performance in both in-domain and out-of-domain settings; 3) explicitly modeling the relationship between questions and schemas further improves Seq2Seq models. More importantly, our benchmark presents key challenges in compositional generalization and robustness that these SOTA models cannot address well.
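To make the notion of a "SQL pattern" concrete: one common heuristic (a minimal sketch, not necessarily the benchmark's actual definition) is to reduce a query to its structural skeleton by masking literal values, so that queries differing only in surface values share one pattern. The masking rules below are illustrative assumptions.

```python
import re

def sql_pattern(query: str) -> str:
    """Reduce a SQL query to a rough structural pattern by masking
    literals. (Illustrative heuristic only, not UNITE's definition.)"""
    q = query.strip().rstrip(";")
    q = re.sub(r"'[^']*'", "'_'", q)        # mask string literals
    q = re.sub(r"\b\d+(\.\d+)?\b", "_", q)  # mask numeric literals
    return re.sub(r"\s+", " ", q).upper()   # normalize spacing/case

# Two comparative questions with different thresholds share one pattern:
p1 = sql_pattern("SELECT name FROM cars WHERE horsepower > 200")
p2 = sql_pattern("SELECT name FROM cars WHERE horsepower > 150")
assert p1 == p2
```

Under this definition, counting distinct patterns across datasets gives a rough measure of structural (rather than lexical) query diversity.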
URL
https://arxiv.org/abs/2305.16265