Abstract
Distributed representations of text can be used as features for training a statistical classifier. These representations may be created either as a composition of word vectors or as context-based sentence vectors. We compare the two kinds of representations (word-based versus context-based) on three classification problems: influenza infection classification, drug usage classification, and personal health mention classification. For statistical classifiers trained on each of these problems, context-based representations based on ELMo, the Universal Sentence Encoder, the Neural-Net Language Model, and FLAIR outperform the word-based representations Word2Vec and GloVe, as well as their variants adapted using the MeSH ontology. Using these context-based representations instead of word-based representations improves accuracy by 2-4%.
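As a minimal sketch of the first kind of representation the abstract mentions, a sentence feature vector can be built as a composition (here, the average) of its word vectors. The 3-dimensional vectors and vocabulary below are made-up toy values; in the paper's setting they would come from pretrained models such as Word2Vec or GloVe.

```python
# Toy word-vector table; real embeddings (e.g. Word2Vec, GloVe) would
# be higher-dimensional and loaded from a pretrained model.
word_vectors = {
    "i":    [0.1, 0.0, 0.2],
    "have": [0.0, 0.3, 0.1],
    "flu":  [0.5, 0.2, 0.0],
}

def sentence_vector(tokens, vectors):
    """Compose a fixed-length sentence vector by averaging the
    word vectors of all in-vocabulary tokens."""
    dim = len(next(iter(vectors.values())))
    known = [vectors[t] for t in tokens if t in vectors]
    if not known:
        # No token found in the vocabulary: fall back to a zero vector.
        return [0.0] * dim
    return [sum(col) / len(known) for col in zip(*known)]

# One fixed-length feature vector per sentence, usable as input
# to any statistical classifier.
features = sentence_vector(["i", "have", "flu"], word_vectors)
```

Context-based encoders such as ELMo or the Universal Sentence Encoder instead produce the sentence vector directly from the full token sequence, so word order and context influence the result, which averaging discards.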
URL
https://arxiv.org/abs/1906.05468