Abstract
Large Language Models (LLMs) have emerged as powerful support tools across a wide range of natural language tasks and application domains. Recent studies have focused on exploring their capabilities for data annotation. This paper provides a comparative overview of twelve studies investigating the potential of LLMs for labelling data. While the models offer promising cost and time savings, they exhibit considerable limitations, including limited representativeness, bias, sensitivity to prompt variations, and a preference for English. Leveraging insights from these studies, our empirical analysis further examines the alignment between human and GPT-generated opinion distributions across four subjective datasets. In contrast to studies that examine representation indirectly, our methodology obtains the opinion distribution directly from GPT. Our analysis thereby supports the minority of studies that consider diverse perspectives when evaluating data annotation tasks and highlights the need for further research in this direction.
URL
https://arxiv.org/abs/2405.01299