Abstract
We commonly use agreement measures to assess the utility of judgements made by human annotators in Natural Language Processing (NLP) tasks. While inter-annotator agreement is frequently used as an indication of label reliability by measuring consistency between annotators, we argue for the additional use of intra-annotator agreement to measure label stability over time. However, in a systematic review, we find that the latter is rarely reported in this field. Calculating these measures can act as important quality control and provide insights into why annotators disagree. We propose exploratory annotation experiments to investigate the relationships between these measures and perceptions of subjectivity and ambiguity in text items.
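To illustrate the distinction the abstract draws, the following is a minimal sketch, not code from the paper: it computes both measures with Cohen's kappa, one common chance-corrected agreement statistic. The labels and the second annotation round below are illustrative assumptions.

```python
from sklearn.metrics import cohen_kappa_score

# Inter-annotator agreement: consistency between two different annotators
# labelling the same items (illustrative labels, not data from the paper).
annotator_a = ["pos", "neg", "pos", "neu", "pos"]
annotator_b = ["pos", "neg", "neu", "neu", "pos"]
inter_kappa = cohen_kappa_score(annotator_a, annotator_b)

# Intra-annotator agreement: stability of one annotator's labels over time,
# i.e. the same annotator re-labelling the same items in a later round.
annotator_a_round2 = ["pos", "neg", "pos", "pos", "pos"]
intra_kappa = cohen_kappa_score(annotator_a, annotator_a_round2)

print(f"inter-annotator kappa: {inter_kappa:.2f}")  # reliability across annotators
print(f"intra-annotator kappa: {intra_kappa:.2f}")  # stability within one annotator
```

Low intra-annotator kappa on particular items is the kind of signal the authors suggest relating to perceived subjectivity and ambiguity in those items.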
URL
https://arxiv.org/abs/2301.10684