Abstract
A trustworthy real-world prediction system should be well calibrated; that is, its confidence in an answer should be indicative of the likelihood that the answer is correct, enabling deferral to a more expensive expert when confidence is low. While recent studies have shown that unsupervised pre-training produces large language models (LMs) that are remarkably well calibrated, the most widely used LMs in practice are fine-tuned with reinforcement learning from human feedback (RLHF-LMs) after the initial unsupervised pre-training stage, and results are mixed as to whether these models preserve the calibration of their ancestors. In this paper, we conduct a broad evaluation of computationally feasible methods for extracting confidence scores from LMs fine-tuned with RLHF. We find that, with the right prompting strategy, RLHF-LMs verbalize probabilities that are much better calibrated than the models' conditional probabilities, enabling fairly well-calibrated predictions. Through a combination of prompting strategy and temperature scaling, we find that we can reduce the expected calibration error of RLHF-LMs by over 50%.
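The two quantities named in the abstract, expected calibration error (ECE) and temperature scaling, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation; the bin count and function names are my own choices.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then average the
    |accuracy - mean confidence| gap per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap  # weight gap by fraction of samples in bin
    return ece

def temperature_scale(logits, temperature):
    """Soften (T > 1) or sharpen (T < 1) a logit vector before the softmax."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()
```

For example, a model that says "90% confident" on ten questions and answers nine of them correctly has an ECE of zero on that bin; temperature scaling with `T > 1` pulls overconfident probabilities back toward uniform.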
URL
https://arxiv.org/abs/2305.14975