Abstract
Content Warning: This work contains examples that potentially implicate stereotypes, associations, and other harms that could be offensive to individuals in certain social groups. Large pre-trained language models are known to carry social biases towards different demographics, which can further amplify existing stereotypes in our society and cause even more harm. Text-to-SQL is an important task whose models are mainly adopted by administrative industries, where unfair decisions may lead to catastrophic consequences. However, existing Text-to-SQL models are trained on clean, neutral datasets, such as Spider and WikiSQL. This, to some extent, covers up the social bias in models under ideal conditions, which may nevertheless emerge in real application scenarios. In this work, we aim to uncover and categorize social biases in Text-to-SQL models. We summarize the categories of social bias that may occur in structured data for Text-to-SQL models. We build test benchmarks and reveal that models with similar task accuracy can contain social biases at very different rates. We show how to take advantage of our methodology to uncover and assess social biases in the downstream Text-to-SQL task. We will release our code and data.
URL
https://arxiv.org/abs/2305.16253