Abstract
Toxic comment detection on social media has proven essential for content moderation. This paper compares a wide range of models on a highly skewed multi-label hate speech dataset, considering inference time alongside several metrics that measure performance and bias. We show that all BERT-based models achieve similar performance regardless of their size, optimizations, or the language used to pre-train them. RNNs are much faster at inference than any of the BERT models, and the BiLSTM remains a good compromise between performance and inference time. RoBERTa with Focal Loss offers the best performance on bias metrics and AUROC, while DistilBERT combines good AUROC with low inference time. All models are affected by the bias of associating identities; BERT, RNNs, and XLNet are less sensitive than CNNs and Compact Convolutional Transformers.
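The abstract pairs RoBERTa with Focal Loss on a highly skewed dataset. As a minimal sketch (not the paper's implementation), the standard binary focal loss (Lin et al., 2017) for multi-label classification can be written as follows; the `gamma` and `alpha` defaults are the commonly used values, not values taken from the paper:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss for multi-label classification.

    Down-weights easy, well-classified examples so training focuses on
    hard, rare positives -- which is why it suits skewed label
    distributions like the one described in the abstract.

    probs:   predicted probabilities in (0, 1), one per label
    targets: binary ground-truth labels (0 or 1)
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)          # numerical stability
    p_t = np.where(targets == 1, probs, 1 - probs)  # prob of the true class
    alpha_t = np.where(targets == 1, alpha, 1 - alpha)
    # (1 - p_t)^gamma shrinks the loss of confident, correct predictions
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))
```

With `gamma=0` this reduces to alpha-weighted binary cross-entropy; larger `gamma` suppresses the contribution of easy examples more aggressively.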
URL
https://arxiv.org/abs/2301.11125