Abstract
Representing text in a multidimensional space can be achieved with sentence embedding models such as Sentence-BERT (SBERT). However, when the data has a complex multilevel structure, training these models requires individually trained class-specific models, which increases time and computing costs. We propose a two-step approach that maps sentences according to their hierarchical memberships and polarity. We first teach the upper-level sentence space with an AdaCos loss function and then fine-tune with a novel loss function based mainly on the cosine similarity of intra-level pairs. We apply this method to three datasets: two weakly supervised Big Five personality datasets obtained from English and Japanese Twitter data, and the benchmark MNLI dataset. We show that our single-model approach outperforms multiple class-specific classification models.
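For concreteness, below is a minimal PyTorch sketch of the two training signals the abstract describes: an AdaCos classification head (with the fixed scale from Zhang et al., 2019) for the upper-level classes, and a cosine-similarity loss over intra-level pairs. All names here, and the exact form of the pair loss, are illustrative assumptions; the abstract does not give the authors' precise formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaCosHead(nn.Module):
    """Fixed-scale AdaCos classification head over sentence embeddings."""
    def __init__(self, embed_dim: int, num_classes: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        # Fixed AdaCos scale: sqrt(2) * log(C - 1)
        self.scale = math.sqrt(2.0) * math.log(num_classes - 1)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between L2-normalized embeddings and class weights,
        # scaled so it can be fed directly into cross-entropy.
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        return self.scale * cos

def intra_level_pair_loss(emb_a: torch.Tensor,
                          emb_b: torch.Tensor,
                          same_polarity: torch.Tensor) -> torch.Tensor:
    """Illustrative cosine pair loss: pull same-polarity pairs toward
    cosine +1, push opposite-polarity pairs toward -1 (a stand-in for
    the paper's loss, whose exact form is not given in the abstract)."""
    cos = F.cosine_similarity(emb_a, emb_b)
    target = torch.where(same_polarity,
                         torch.ones_like(cos),
                         -torch.ones_like(cos))
    return F.mse_loss(cos, target)

# Hypothetical two-step usage with an SBERT-style encoder `encoder`:
#   Step 1: loss1 = F.cross_entropy(head(encoder(sentences)), class_labels)
#   Step 2: loss2 = intra_level_pair_loss(encoder(sent_a), encoder(sent_b),
#                                         same_polarity)
```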
URL
https://arxiv.org/abs/2305.05748