Abstract
State-of-the-art abstractive summarization systems often generate \emph{hallucinations}, i.e., content that is not directly inferable from the source text. Although hallucinations are commonly assumed to be incorrect, much of this hallucinated content is in fact consistent with world knowledge (factual hallucinations). Including such factual hallucinations in a summary can be beneficial, as they provide additional background information. In this work, we propose a novel detection approach that separates factual from non-factual hallucinations of entities. Our method is based on an entity's prior and posterior probabilities according to pre-trained and fine-tuned masked language models, respectively. Empirical results suggest that our method vastly outperforms three strong baselines in both accuracy and F1 score, and correlates strongly with human judgments on factuality classification tasks. Furthermore, our approach can provide insight into whether a particular hallucination is caused by the summarizer's pre-training or fine-tuning step.
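The core idea above can be illustrated with a small sketch. The abstract describes comparing an entity's prior probability (from a pre-trained masked LM, which sees no source document) with its posterior probability (from a fine-tuned model conditioned on the source). The decision rule and thresholds below are hypothetical simplifications for illustration; the paper's actual method may use these probabilities as features for a learned classifier rather than fixed thresholds, and the stub probabilities stand in for real masked-LM scores.

```python
def classify_entity(prior: float, posterior: float,
                    posterior_thresh: float = 0.1,
                    prior_thresh: float = 0.1) -> str:
    """Classify a generated entity using hypothetical threshold rules.

    prior:     p(entity | summary context) under a pre-trained masked LM
               (no access to the source document) -- reflects world knowledge.
    posterior: p(entity | summary context, source) under a fine-tuned model
               -- reflects support from the source text.
    """
    if posterior >= posterior_thresh:
        # The source supports the entity: not a hallucination.
        return "faithful"
    if prior >= prior_thresh:
        # Unsupported by the source but plausible under world knowledge.
        return "factual hallucination"
    # Unsupported by both the source and world knowledge.
    return "non-factual hallucination"


# Toy examples with made-up probabilities:
print(classify_entity(prior=0.02, posterior=0.80))  # faithful
print(classify_entity(prior=0.60, posterior=0.01))  # factual hallucination
print(classify_entity(prior=0.01, posterior=0.01))  # non-factual hallucination
```

A high prior with a low posterior suggests the entity was likely introduced from knowledge acquired during pre-training rather than copied from the source, which is the kind of attribution the abstract's final sentence alludes to.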
URL
https://arxiv.org/abs/2109.09784