Abstract
Lately, propelled by the phenomenal advances around the transformer architecture, the legal NLP field has enjoyed spectacular growth. To measure progress, well-curated and challenging benchmarks are crucial. However, most benchmarks are English only, and in legal NLP specifically there is no multilingual benchmark available yet. Additionally, many benchmarks are saturated, with the best models clearly outperforming the best humans and achieving near-perfect scores. We survey the legal NLP literature and select 11 datasets covering 24 languages, creating LEXTREME. To provide a fair comparison, we propose two aggregate scores, one based on the datasets and one on the languages. The best baseline (XLM-R large) achieves both a dataset aggregate score and a language aggregate score of 61.3. This indicates that LEXTREME is still very challenging and leaves ample room for improvement. To make it easy for researchers and practitioners to use, we release LEXTREME on Hugging Face together with all the code required to evaluate models and a public Weights and Biases project with all the runs.
URL
https://arxiv.org/abs/2301.13126