Abstract
This paper introduces the Align, Minimize and Diversify (AMD) method, a Source-Free Unsupervised Domain Adaptation approach for Handwritten Text Recognition (HTR). The framework decouples the adaptation process from the source data, not only sidestepping the resource-intensive retraining process but also making it possible to leverage the wealth of pre-trained knowledge encoded in modern Deep Learning architectures. Our method explicitly eliminates the need to revisit the source data during adaptation by incorporating three distinct regularization terms: the Align term, which reduces the feature distribution discrepancy between source and target data, ensuring the transferability of the pre-trained representation; the Minimize term, which encourages assertive predictions, pushing the outputs towards one-hot-like distributions to reduce prediction uncertainty; and the Diversify term, which safeguards against degenerate predictions by promoting varied and distinctive sequences throughout the target data, preventing informational collapse. Experimental results on several benchmarks demonstrate the effectiveness and robustness of AMD, showing it to be competitive with, and often outperform, DA methods in HTR.
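The three regularizers described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the alignment term is approximated here by simple moment matching against stored source feature statistics, and the names `amd_losses`, `target_probs`, and `target_feats` are assumptions for illustration.

```python
import numpy as np

def amd_losses(target_probs, target_feats, source_mean, source_std, eps=1e-8):
    """Hedged sketch of the three AMD regularizers.

    target_probs: (B, T, C) per-frame class probabilities on target data
    target_feats: (B, D) pooled target features
    source_mean, source_std: (D,) feature statistics saved from the source model
    """
    # Align: match target feature statistics to the stored source statistics
    # (moment matching here is a stand-in for the paper's alignment term).
    t_mean = target_feats.mean(axis=0)
    t_std = target_feats.std(axis=0)
    align = np.sum((t_mean - source_mean) ** 2 + (t_std - source_std) ** 2)

    # Minimize: mean per-frame entropy; driving it down yields assertive,
    # one-hot-like output distributions.
    minimize = -np.mean(np.sum(target_probs * np.log(target_probs + eps), axis=-1))

    # Diversify: negative entropy of the marginal (batch-averaged)
    # distribution; minimizing it spreads predictions across classes,
    # preventing collapse to a single repeated output.
    marginal = target_probs.mean(axis=(0, 1))
    diversify = np.sum(marginal * np.log(marginal + eps))

    return align, minimize, diversify
```

In practice the three terms would be combined into a single weighted objective and minimized on unlabeled target data only, which is what makes the approach source-free.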
URL
https://arxiv.org/abs/2404.18260