Abstract
Crowdsourced labels play a crucial role in evaluating task-oriented dialogue systems (TDSs). Obtaining high-quality, consistent ground-truth labels from annotators, however, is challenging. When evaluating a TDS, annotators must fully comprehend the dialogue before providing judgments. Previous studies suggest using only a portion of the dialogue context in the annotation process, but the impact of this limitation on label quality remains unexplored. This study investigates the influence of dialogue context on annotation quality, considering truncated contexts for relevance and usefulness labeling. We further propose using large language models (LLMs) to summarize the dialogue context into a rich yet concise description, and we study the impact of this summarization on annotator performance. We find that reducing context leads to more positive ratings. Conversely, providing the entire dialogue context yields higher-quality relevance ratings but introduces ambiguity in usefulness ratings. Using the first user utterance as context leads to consistent ratings, comparable to those obtained with the entire dialogue, with significantly reduced annotation effort. Our findings show how task design, particularly the availability of dialogue context, affects the quality and consistency of crowdsourced evaluation labels.
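To make the three context conditions concrete, below is a minimal Python sketch of how the annotator-facing context might be constructed under each condition (full dialogue, first user utterance only, and LLM summary). The turn representation, field names, and the `summarize` callable are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, Dict, List

# A dialogue as a list of turns; the speaker/text schema is an assumption.
Dialogue = List[Dict[str, str]]


def full_context(dialogue: Dialogue) -> str:
    """Entire dialogue history, as in the full-context condition."""
    return "\n".join(f'{turn["speaker"]}: {turn["text"]}' for turn in dialogue)


def first_user_utterance(dialogue: Dialogue) -> str:
    """Only the first user turn -- the low-effort condition the abstract
    reports as yielding ratings consistent with the full dialogue."""
    return next(t["text"] for t in dialogue if t["speaker"] == "user")


def summarized_context(dialogue: Dialogue, summarize: Callable[[str], str]) -> str:
    """LLM-generated summary; `summarize` is a hypothetical wrapper around
    whatever LLM endpoint is available."""
    return summarize(full_context(dialogue))


if __name__ == "__main__":
    dialogue = [
        {"speaker": "user", "text": "Find me a cheap Italian restaurant downtown."},
        {"speaker": "system", "text": "Luigi's Trattoria is a budget-friendly option."},
        {"speaker": "user", "text": "Does it take reservations?"},
    ]
    # Stand-in summarizer for demonstration only; not a real model call.
    fake_summarize = lambda text: text.split("\n")[0]
    print(full_context(dialogue))
    print(first_user_utterance(dialogue))
    print(summarized_context(dialogue, fake_summarize))
```

Each function produces the string an annotator would see as "context" before rating a system response, which is the variable the study manipulates.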
URL
https://arxiv.org/abs/2404.09980