Abstract
Progress in machine learning has been driven in large part by massive increases in data. However, large web-scale datasets such as LAION are largely uncurated beyond searches for exact duplicates, potentially leaving much redundancy. Here, we introduce SemDeDup, a method which leverages embeddings from pre-trained models to identify and remove semantic duplicates: data pairs which are semantically similar, but not exactly identical. Removing semantic duplicates preserves performance and speeds up learning. Analyzing a subset of LAION, we show that SemDeDup can remove 50% of the data with minimal performance loss, effectively halving training time. Moreover, performance increases out of distribution. Also, analyzing language models trained on C4, a partially curated dataset, we show that SemDeDup improves over prior approaches while providing efficiency gains. SemDeDup provides an example of how simple ways of leveraging quality embeddings can be used to make models learn faster with less data.
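The abstract describes identifying semantic duplicates by comparing embeddings from a pre-trained model and discarding all but one example from each group of near-identical pairs. Below is a minimal sketch of that idea, assuming a k-means clustering step to keep pairwise comparisons within small groups and a cosine-similarity threshold for what counts as a duplicate; the function name, threshold value, and clustering details are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of embedding-based semantic deduplication (illustrative only;
# hyper-parameters and the exact selection rule may differ from the paper).
import numpy as np
from sklearn.cluster import KMeans

def semantic_dedup(embeddings: np.ndarray,
                   n_clusters: int = 100,
                   threshold: float = 0.95) -> np.ndarray:
    """Return indices of examples to keep after removing semantic duplicates.

    embeddings: (n, d) array from a pre-trained encoder (e.g. a CLIP image encoder).
    threshold: cosine similarity above which two examples are treated as duplicates.
    """
    # L2-normalize so dot products are cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

    # Cluster first so pairwise comparisons stay within small groups.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(emb)

    keep = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        sims = emb[idx] @ emb[idx].T          # pairwise cosine similarities
        np.fill_diagonal(sims, 0.0)
        removed = np.zeros(len(idx), dtype=bool)
        for i in range(len(idx)):
            if removed[i]:
                continue
            # Drop later items that are semantic duplicates of item i,
            # keeping one representative per duplicate group.
            dup = (sims[i] > threshold) & (np.arange(len(idx)) > i)
            removed |= dup
        keep.extend(idx[~removed].tolist())
    return np.array(sorted(keep))
```

Keeping a single representative per duplicate group is what allows a large fraction of the data (the abstract reports 50% of a LAION subset) to be removed while preserving most of the information the duplicates carried.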
URL
https://arxiv.org/abs/2303.09540