Abstract
When neural networks encounter unfamiliar data that deviate from their training set, this signifies a domain shift. While these networks produce predictions for such inputs, they typically fail to account for how familiar they are with these novel observations. This challenge is even more pronounced in resource-constrained settings, such as embedded systems or edge devices. To address it, we aim to recalibrate a neural network's decision boundaries in relation to its cognizance of the data it observes, introducing an approach we term certainty distillation. While prevailing works tackle unsupervised domain adaptation (UDA) by minimizing model entropy, they inadvertently produce models that suffer from calibration errors, a dilemma we term the over-certainty phenomenon. In this paper, we examine the drawbacks of this traditional learning paradigm. As a remedy, we propose a UDA algorithm that not only improves accuracy but also ensures model calibration, all while remaining suitable for environments with limited computational resources.
Abstract (translated)
When neural networks are confronted with unfamiliar data that deviate from their training set, this indicates a domain shift. While these networks are effective at producing predictions for their inputs, they typically cannot account for how familiar they are with these new observations. This problem becomes even more pronounced in resource-constrained settings such as embedded systems or edge devices. To address such challenges, we aim to readjust the relationship between a neural network's decision boundaries and its cognizance of the data it observes through a method we call certainty distillation. While existing works approach unsupervised domain adaptation (UDA) by attempting to limit model entropy, they unintentionally produce models that struggle with calibration inaccuracies, which we call the over-certainty phenomenon. In this paper, we examine the shortcomings of this traditional learning paradigm. To resolve the issue, we propose a UDA algorithm that not only improves accuracy but also ensures model calibration, while remaining suitable for resource-constrained environments.
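To make the over-certainty claim concrete, here is a minimal sketch (not from the paper) of the two quantities involved: the predictive-entropy objective that many UDA methods minimize on unlabeled target data, and the expected calibration error (ECE) that such minimization can inflate. All function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over class logits.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy_loss(logits):
    """Mean predictive entropy. Minimizing this on unlabeled target
    data sharpens predictions, which is the standard UDA entropy
    objective the abstract refers to."""
    p = softmax(logits)
    return -np.mean(np.sum(p * np.log(p + 1e-12), axis=-1))

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: confidence-vs-accuracy gap, averaged over confidence bins.
    Over-certain models show high confidence with no matching gain
    in accuracy, so this gap grows."""
    conf = probs.max(axis=-1)
    correct = (probs.argmax(axis=-1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```

Scaling logits up (e.g. `entropy_loss(2 * logits)`) lowers the entropy term without changing which class is predicted, illustrating how a model can grow more confident without becoming more accurate.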
URL
https://arxiv.org/abs/2404.16168