Abstract
Machine learning (ML) systems in natural language processing (NLP) face significant challenges in generalizing to out-of-distribution (OOD) data, where the test distribution differs from the training data distribution. This raises important questions about the robustness of NLP models and their reported high accuracy, which may be artificially inflated by their underlying sensitivity to systematic biases. Despite these challenges, there is a lack of comprehensive surveys on the generalization challenge from an OOD perspective in text classification. This paper aims to fill that gap by presenting the first comprehensive review of recent progress, methods, and evaluations on this topic. We further discuss the challenges involved and potential future research directions. By providing quick access to existing work, we hope this survey will encourage future research in this area.
URL
https://arxiv.org/abs/2305.14104