Abstract
We consider the task of Extreme Multi-Label Text Classification (XMTC) in the legal domain. We release a new dataset of 57k legislative documents from EURLEX, the European Union's public document database, annotated with concepts from EUROVOC, a multidisciplinary thesaurus. The dataset is substantially larger than previous EURLEX datasets and suitable for XMTC, few-shot and zero-shot learning. Experimenting with several neural classifiers, we show that BIGRUs with self-attention outperform the current multi-label state-of-the-art methods, which employ label-wise attention. Replacing CNNs with BIGRUs in label-wise attention networks leads to the best overall performance.
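The abstract contrasts two pooling mechanisms: plain self-attention, which produces one document representation, and label-wise attention, which produces a separate representation per label. The difference can be sketched with minimal NumPy functions; the names, shapes, and random inputs below are illustrative assumptions, not taken from the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_pool(H, w):
    # H: (T, d) encoder states (e.g. BIGRU outputs), w: (d,) attention vector.
    # Returns a single (d,) document representation.
    a = softmax(H @ w)            # (T,) weights over tokens
    return a @ H                  # (d,)

def labelwise_attention_pool(H, W):
    # W: (L, d), one attention vector per label.
    # Returns one (d,) representation per label: shape (L, d).
    A = softmax(H @ W.T, axis=0)  # (T, L) per-label weights over tokens
    return A.T @ H                # (L, d)
```

A multi-label classifier would then apply a sigmoid-scored output layer on top of either pooled representation; with label-wise attention, each label scores its own representation.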
URL
https://arxiv.org/abs/1905.10892