Abstract
Distinguishing in-distribution (InD) and out-of-distribution (OOD) inputs is crucial for the reliable deployment of classification systems. However, OOD data is typically unavailable or difficult to collect, posing a significant challenge for accurate OOD detection. In this work, we present a method that harnesses the generative capabilities of Large Language Models (LLMs) to create high-quality synthetic OOD proxies, eliminating the dependency on any external OOD data source. We study the efficacy of our method on classical text classification tasks such as toxicity detection and sentiment classification, as well as on classification tasks arising in LLM development and deployment, such as training a reward model for RLHF and detecting misaligned generations. Extensive experiments on nine InD-OOD dataset pairs and various model sizes show that our approach dramatically lowers false positive rates (reaching a perfect zero in some cases) while maintaining high accuracy on in-distribution tasks, outperforming baseline methods by a significant margin.
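The abstract does not spell out the generation or training recipe, so the following is only a minimal sketch of one plausible instantiation: prompt an off-the-shelf LLM for text unrelated to the in-distribution task, then train a classifier with an added reject class on the mixture. The model name, the prompt, the helper `generate_ood_proxies`, and the (K+1)-class setup are all illustrative assumptions, not the paper's confirmed method.

```python
# Hedged sketch: LLM-generated OOD proxies for training an OOD-aware
# classifier. Everything below is an assumed instantiation; the paper's
# actual prompting and training details are not given in the abstract.
from transformers import pipeline

# Small model chosen only so the sketch runs anywhere; the paper presumably
# uses a much stronger LLM.
generator = pipeline("text-generation", model="gpt2")

PROMPT = "Write one short sentence on a random topic unrelated to movie reviews:\n"

def generate_ood_proxies(n: int) -> list[str]:
    """Sample n synthetic OOD proxy texts from the generator."""
    outputs = generator(
        [PROMPT] * n,
        max_new_tokens=30,
        do_sample=True,
        temperature=1.0,
        return_full_text=False,  # keep only the continuation, not the prompt
    )
    return [out[0]["generated_text"].strip() for out in outputs]

# Toy in-distribution data (binary sentiment) plus synthetic OOD proxies
# labeled with a dedicated reject class (index 2).
ind_texts = ["a delightful film", "a tedious, joyless slog"]
ind_labels = [1, 0]
ood_texts = generate_ood_proxies(2)
texts = ind_texts + ood_texts
labels = ind_labels + [2] * len(ood_texts)
# `texts`/`labels` would then feed a standard 3-way fine-tuning loop; at
# test time, inputs scored highest on the reject class are flagged as OOD.
```

At test time, thresholding the reject-class probability trades off the false positive rate against in-distribution accuracy, which matches the metrics the abstract reports.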
URL
https://arxiv.org/abs/2502.03323