Abstract
We propose to use captions from the Web as a previously underutilized resource for paraphrases (i.e., texts with the same "message") and to create and analyze a corresponding dataset. When an image is reused on the Web, it is often given a new, original caption. We hypothesize that different captions for the same image naturally form a set of mutual paraphrases. To demonstrate the suitability of this idea, we analyze captions in the English Wikipedia, where editors frequently relabel the same image for different articles. The paper introduces the underlying mining technology and compares known paraphrase corpora with our new resource with respect to their syntactic and semantic paraphrase similarity. In this context, we introduce characteristic maps along the two similarity dimensions to identify the style of paraphrases coming from different sources. An annotation study demonstrates the high reliability of the algorithmically determined characteristic maps.
URL
https://arxiv.org/abs/2301.11030