Abstract
We consider Large-Scale Multi-Label Text Classification (LMTC) in the legal domain. We release a new dataset of 57k legislative documents from EUR-LEX, annotated with ~4.3k EUROVOC labels, which is suitable for LMTC as well as few- and zero-shot learning. Experimenting with several neural classifiers, we show that BiGRUs with label-wise attention outperform other current state-of-the-art methods. Domain-specific WORD2VEC and context-sensitive ELMO embeddings further improve performance. We also find that considering only particular zones of the documents is sufficient. This allows us to bypass BERT's maximum text length limit and fine-tune BERT, obtaining the best results in all but the zero-shot learning cases.
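The label-wise attention mechanism mentioned above learns, for each label, its own attention distribution over the encoder's token representations, so each label scores the document through its own label-specific document vector. A minimal NumPy sketch of this idea follows; the variable names (`U`, `W`) and the random stand-ins for BiGRU outputs are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

T, H, L = 6, 8, 4  # tokens, hidden size, number of labels (toy sizes)

# Token representations, e.g. the outputs of a BiGRU encoder
# (random stand-ins here instead of a trained encoder).
Htok = rng.normal(size=(T, H))

# One attention query vector and one output weight vector per label
# (hypothetical parameter names; the actual model may differ in detail).
U = rng.normal(size=(L, H))  # attention queries, one per label
W = rng.normal(size=(L, H))  # classification weights, one per label
b = np.zeros(L)              # per-label biases

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

scores = np.empty(L)
for l in range(L):
    att = softmax(Htok @ U[l])   # attention over tokens, specific to label l
    d_l = att @ Htok             # label-specific document vector
    # independent sigmoid per label, as in multi-label classification
    scores[l] = 1.0 / (1.0 + np.exp(-(W[l] @ d_l + b[l])))
```

Because each label attends to the tokens most relevant to it, rare labels are not forced to share a single document vector with frequent ones, which is one reason this architecture suits large label sets.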
URL
https://arxiv.org/abs/1906.02192