Abstract
Despite the success of deep neural networks (DNNs) on sequential-data recognition (e.g., scene text and speech), they suffer from over-confidence, caused mainly by overfitting the cross-entropy loss during training, which can make decision-making less reliable. Confidence calibration has recently been proposed as an effective remedy. Nevertheless, most existing confidence calibration methods target non-sequential data and transfer poorly to sequences, since they seldom exploit the intrinsic contextual dependency within sequences or class-specific statistical priors. To this end, we propose a Context-Aware Selective Label Smoothing (CASLS) method for calibrating sequential data. CASLS fully leverages the contextual dependency in sequences to construct confusion matrices of contextual prediction statistics over different classes. Class-specific error rates are then used to adjust the smoothing strength for each class, achieving adaptive calibration. Experimental results on sequence recognition tasks, including scene text recognition and speech recognition, demonstrate that our method achieves state-of-the-art performance.
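The core mechanism described above — deriving a per-class smoothing strength from a confusion matrix of prediction statistics, so that frequently confused classes are smoothed more aggressively — can be sketched as follows. This is a minimal illustration of the idea, not the authors' released implementation: the function names, the normalization of error rates against the hardest class, and the `base_eps` parameter are all our assumptions.

```python
import numpy as np

def class_smoothing_weights(confusion, base_eps=0.1):
    """Derive per-class smoothing strengths from a confusion matrix.

    confusion[i, j] counts how often true class i was predicted as class j.
    Classes with higher error rates receive stronger smoothing, so the model
    is discouraged from over-confidence on easily confused classes.
    (Sketch of the general idea; not the paper's exact weighting scheme.)
    """
    confusion = np.asarray(confusion, dtype=float)
    totals = confusion.sum(axis=1)
    correct = np.diag(confusion)
    error_rate = 1.0 - correct / np.maximum(totals, 1.0)
    max_err = error_rate.max()
    if max_err == 0.0:
        return np.zeros_like(error_rate)
    # Normalize so the most error-prone class receives the full base_eps.
    return base_eps * error_rate / max_err

def smooth_labels(targets, num_classes, eps_per_class):
    """One-hot targets softened by a class-specific smoothing strength."""
    eye = np.eye(num_classes)
    out = np.empty((len(targets), num_classes))
    for i, t in enumerate(targets):
        eps = eps_per_class[t]
        # Standard label smoothing, but with eps chosen per class.
        out[i] = (1.0 - eps) * eye[t] + eps / num_classes
    return out
```

For example, with a two-class confusion matrix `[[90, 10], [30, 70]]`, class 1 (error rate 0.3) receives the full base smoothing strength, while class 0 (error rate 0.1) receives a third of it; each smoothed target row still sums to one.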
URL
https://arxiv.org/abs/2303.06946