Abstract
This study introduces a taxonomy of stereotype content in contemporary large language models (LLMs). We prompt ChatGPT 3.5, Llama 3, and Mixtral 8x7B, three powerful and widely used LLMs, for the characteristics associated with 87 social categories (e.g., gender, race, occupations). We identify 14 stereotype dimensions (e.g., Morality, Ability, Health, Beliefs, Emotions), accounting for ~90% of LLM stereotype associations. Warmth and Competence facets were the most frequent content, but all other dimensions appeared at significant rates. Stereotypes were more positive in LLMs than in humans, but there was significant variability across categories and dimensions. Finally, the taxonomy predicted the LLMs' internal evaluations of social categories (e.g., how positively or negatively the categories were represented), supporting the relevance of a multidimensional taxonomy for characterizing LLM stereotypes. Our findings suggest that high-dimensional human stereotypes are reflected in LLMs and must be considered in AI auditing and debiasing to minimize unidentified harms arising from reliance on low-dimensional views of bias in LLMs.
URL
https://arxiv.org/abs/2408.00162