Abstract
Recently, Large Language Models (LLMs) have demonstrated impressive capabilities across a variety of domains and tasks. We investigate prompt design for the multi-turn text-to-SQL task and aim to enhance LLMs' reasoning capacity when generating SQL queries. In a conversational context, the current SQL query can often be derived from the preceding SQL query with only a few edit operations, owing to context dependency. We introduce CoE-SQL, a method that prompts LLMs to generate the current SQL query from the previously generated one via a chain of editions. We also conduct extensive ablation studies to determine the optimal configuration of our approach. Our approach consistently outperforms several in-context learning baselines and achieves state-of-the-art performance with LLMs on two benchmarks, SParC and CoSQL, while remaining competitive with SOTA fine-tuned models.
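The core idea can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: the operation names (`add_condition`, `replace_column`) and the naive string-based application are assumptions for illustration; CoE-SQL defines its own edit-operation set and prompts the LLM to emit the chain of editions, which is then reflected in the new SQL query.

```python
def apply_edits(prev_sql: str, edits):
    """Apply a chain of (operation, *args) edits to the previous turn's SQL.

    Toy string-level edits for illustration only; a real system would
    operate on a parsed SQL representation, not raw strings.
    """
    sql = prev_sql
    for op, *args in edits:
        if op == "replace_column":
            # Swap one column name for another (naive textual replace).
            old, new = args
            sql = sql.replace(old, new)
        elif op == "add_condition":
            # Append a predicate, starting a WHERE clause if absent.
            (cond,) = args
            sql = sql + (" AND " if "WHERE" in sql else " WHERE ") + cond
        else:
            raise ValueError(f"unknown edit operation: {op}")
    return sql

# Turn 1: "Show the names of all students."
turn1 = "SELECT name FROM students"
# Turn 2: "Only those older than 20." — expressed as a short edit chain
# over the previous SQL instead of a from-scratch rewrite.
turn2 = apply_edits(turn1, [("add_condition", "age > 20")])
print(turn2)  # SELECT name FROM students WHERE age > 20
```

The point of the sketch is the framing: because consecutive turns are context-dependent, a short edit chain is usually a much smaller generation target than the full query.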
URL
https://arxiv.org/abs/2405.02712