Abstract
Sentence Boundary Detection (SBD) has been a major research topic since Automatic Speech Recognition transcripts began to be used for downstream Natural Language Processing tasks such as Part-of-Speech Tagging, Question Answering, or Automatic Summarization. But what about evaluation? Are standard evaluation metrics like precision, recall, F-score, or classification error enough? More importantly, is evaluating an automatic system against a single reference enough to conclude how well an SBD system performs given the final application of the transcript? In this paper we propose Window-based Sentence Boundary Evaluation (WiSeBE), a semi-supervised metric for evaluating Sentence Boundary Detection systems based on multi-reference (dis)agreement. We evaluate and compare the performance of different SBD systems over a set of YouTube transcripts using WiSeBE and standard metrics. This double evaluation gives an understanding of how WiSeBE is a more reliable metric for the SBD task.
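The abstract describes WiSeBE as a window-based metric built on (dis)agreement across multiple human references. The paper defines the exact formulation; the sketch below is only a toy illustration of the underlying idea, with hypothetical function and parameter names: each hypothesized boundary is scored by the fraction of annotators who placed a boundary within a small word-position window of it.

```python
def windowed_agreement(hypothesis, references, window=1):
    """Toy multi-reference agreement score (NOT the official WiSeBE formula).

    hypothesis: set of word indices where the system placed sentence boundaries.
    references: list of sets of boundary indices, one set per human annotator.
    window: tolerance (in word positions) when matching a boundary to a reference.
    Returns the mean, over hypothesized boundaries, of the fraction of
    annotators agreeing with each boundary.
    """
    if not hypothesis:
        return 0.0
    total = 0.0
    for b in hypothesis:
        # Count annotators with a boundary within `window` words of b.
        agree = sum(
            any(abs(b - r) <= window for r in ref) for ref in references
        )
        total += agree / len(references)
    return total / len(hypothesis)
```

For example, with three annotators `[{5, 10}, {5, 11}, {6, 20}]` and hypothesis `{5, 10}`, boundary 5 is supported by all three annotators (6 falls within the window) while boundary 10 is supported by two, so the score reflects partial inter-annotator disagreement rather than a hard match against a single reference.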
URL
https://arxiv.org/abs/1808.08850