Abstract
Learning high-quality document embeddings is a fundamental problem in natural language processing (NLP), information retrieval (IR), recommendation systems, and search engines. Despite recent advances in transformer-based models that produce sentence embeddings via self-contrastive learning, encoding long documents (thousands of words) remains challenging in terms of both efficiency and quality. We therefore train Longformer-based document encoders using a state-of-the-art unsupervised contrastive learning method (SimCSE). We then complement the baseline method, a siamese neural network, with additional convex neural networks based on functional Bregman divergence, aiming to enhance the quality of the output document representations. We show that, overall, the combination of a self-contrastive siamese network and our proposed neural Bregman network outperforms the baselines in two linear classification settings on three long-document topic classification tasks from the legal and biomedical domains.
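To make the self-contrastive objective concrete, the sketch below shows a SimCSE-style in-batch contrastive loss in plain NumPy: the same texts are encoded twice (dropout makes the two passes differ), the matching pair is the positive, and all other in-batch examples serve as negatives. This is an illustrative reconstruction of the general SimCSE objective, not the paper's actual implementation; the function name and the temperature value 0.05 are assumptions.

```python
import numpy as np

def simcse_loss(z1, z2, temperature=0.05):
    """InfoNCE-style SimCSE loss over in-batch negatives (illustrative sketch).

    z1, z2: (batch, dim) embeddings of the SAME texts from two
    dropout-randomized encoder passes; row i of z1 and row i of z2
    form a positive pair, all other rows act as negatives.
    """
    # Cosine similarity: L2-normalize, then take scaled dot products.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / temperature  # (batch, batch) similarity logits

    # Cross-entropy with the diagonal (the matching pair) as the target class,
    # computed with the usual max-subtraction trick for numerical stability.
    logits = sim - sim.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With well-separated, matching pairs the loss approaches zero; shuffling one side so positives no longer align on the diagonal drives it up sharply.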
URL
https://arxiv.org/abs/2305.16031