Abstract
Despite their renowned predictive power on i.i.d. data, convolutional neural networks are known to rely more on high-frequency patterns that humans deem superficial than on low-frequency patterns that agree better with intuitions about what constitutes category membership. This paper proposes a method for training robust convolutional networks by penalizing the predictive power of the local representations learned by earlier layers. Intuitively, our networks are forced to discard predictive signals, such as color and texture, that can be gleaned from local receptive fields, and to rely instead on the global structure of the image. Across a battery of synthetic and benchmark domain adaptation tasks, our method confers improved out-of-domain generalization. In addition, to evaluate cross-domain transfer, we introduce ImageNet-Sketch, a new dataset of sketch-like images that matches the ImageNet classification validation set in categories and scale.
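The core idea above — subtracting the loss of a classifier trained on early-layer local features so that the shared representation is pushed to be *less* locally predictive — can be illustrated with a toy sketch. This is a minimal illustration under assumptions, not the paper's implementation: the patch features, the linear classifiers, the mean-pooled "global" feature, and the weighting `lam` are all invented here for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

# Toy stand-in for early-layer activations: one feature vector per
# local receptive field (patch) of a single image.
n_patches, d, n_classes = 16, 8, 3
patch_feats = rng.normal(size=(n_patches, d))
label = 1  # the image's class label, shared by all its patches
patch_labels = np.full(n_patches, label)

# Side classifier predicting the image label from each *local* patch.
W_local = rng.normal(size=(d, n_classes)) * 0.1
local_loss = cross_entropy(softmax(patch_feats @ W_local), patch_labels)

# Main classifier on a crude pooled "global" representation.
global_feat = patch_feats.mean(axis=0, keepdims=True)
W_global = rng.normal(size=(d, n_classes)) * 0.1
global_loss = cross_entropy(softmax(global_feat @ W_global),
                            np.array([label]))

# Reverse-sign combination: minimizing `total` w.r.t. the shared
# features improves the global prediction while *degrading* the
# local classifier, i.e. it penalizes local predictive power.
lam = 0.5  # assumed trade-off hyperparameter
total = global_loss - lam * local_loss
```

In a full training loop this reversed-sign term would backpropagate through the shared convolutional features (e.g. via gradient reversal), while the side classifier itself is still trained to minimize `local_loss`.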
URL
https://arxiv.org/abs/1905.13549