Abstract
Contrastive learning has been the dominant approach to training state-of-the-art sentence embeddings. Previous studies have typically learned sentence embeddings either from human-annotated natural language inference (NLI) data or from large-scale unlabeled sentences in an unsupervised manner. However, even unlabeled sentences can be difficult to acquire in certain domains. To address these issues, we present SynCSE, a contrastive learning framework that trains sentence embeddings with synthesized data. Specifically, we explore using large language models to synthesize the data samples required for contrastive learning, including (1) producing positive and negative annotations for given unlabeled sentences (SynCSE-partial), and (2) generating sentences together with their corresponding annotations from scratch (SynCSE-scratch). Experimental results on sentence similarity and reranking tasks show that both SynCSE-partial and SynCSE-scratch greatly outperform unsupervised baselines, and that SynCSE-partial even achieves performance comparable to supervised models in most settings.
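To make the SynCSE-partial mode concrete, below is a minimal sketch of how one might prompt an LLM to annotate an unlabeled sentence with a positive (entailed) and a hard-negative (contradictory) counterpart, yielding a contrastive triplet. The prompt wording, the choice of the OpenAI chat API and model, and the `make_triplet` helper are illustrative assumptions, not the paper's actual prompts or pipeline.

```python
# Illustrative sketch of SynCSE-partial-style data synthesis: given an
# unlabeled anchor sentence, ask an LLM for an entailed rewrite (positive)
# and a contradictory rewrite (hard negative). The prompt and triplet
# format are assumptions; the paper does not prescribe this interface.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Given the sentence below, write:\n"
    "1. A paraphrase that preserves its meaning (positive).\n"
    "2. A fluent sentence that contradicts it (hard negative).\n"
    "Return only the two sentences on separate lines, without numbering.\n\n"
    "Sentence: {sentence}"
)

def make_triplet(sentence: str) -> dict:
    """Synthesize an (anchor, positive, negative) triplet for contrastive training."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; any capable chat LLM works
        messages=[{"role": "user", "content": PROMPT.format(sentence=sentence)}],
        temperature=0.7,
    )
    # Naive parsing of the model's two-line reply; a robust pipeline would
    # validate and retry on malformed outputs.
    lines = [l.strip() for l in response.choices[0].message.content.splitlines() if l.strip()]
    return {"anchor": sentence, "positive": lines[0], "negative": lines[1]}

if __name__ == "__main__":
    print(make_triplet("A man is playing a guitar on stage."))
```

SynCSE-scratch would differ only in also asking the LLM to generate the anchor sentence itself (e.g., conditioned on a target domain) before annotating it, so no unlabeled corpus is needed at all.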
URL
https://arxiv.org/abs/2305.15077