Abstract
PCANet and its variants have achieved good accuracy on classification tasks. However, despite the importance of network depth for classification accuracy, these networks have been trained with at most nine layers. In this paper, we introduce a residual compensation convolutional network, the first PCANet-like network trained with hundreds of layers while improving classification accuracy. The proposed network consists of several convolutional layers, each followed by post-processing steps and a classifier. To correct classification errors and significantly increase the network's depth, we train each layer with new labels derived from the residual information of all its preceding layers. This learning mechanism operates in a single forward pass over the network's layers, without backpropagation or gradient computation. Our experiments on four distinct classification benchmarks (MNIST, CIFAR-10, CIFAR-100, and TinyImageNet) show that our deep network outperforms all existing PCANet-like networks and is competitive with several traditional gradient-based models.
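To make the training mechanism concrete, below is a minimal sketch, not the authors' code, of forward-pass, residual-compensation training as described in the abstract: each stage learns unsupervised PCA filters, then fits a least-squares classifier to the residual left by all preceding stages, so no backpropagation is needed. The function names (`train_residual_network`, `pca_filters`), the stage and filter counts, the ReLU-style nonlinearity, and the use of fully connected projections as stand-ins for convolutional layers are all illustrative assumptions.

```python
# Hedged sketch of layer-wise residual-compensation training (no gradients).
# Assumptions: flat feature vectors instead of true convolutions; a simple
# max(., 0) nonlinearity as the "post-processing step"; least squares as
# the per-layer classifier. None of these choices are taken from the paper.
import numpy as np

def one_hot(y, n_classes):
    Y = np.zeros((len(y), n_classes))
    Y[np.arange(len(y)), y] = 1.0
    return Y

def pca_filters(X, n_filters):
    # Leading principal directions of the centered features act as filters.
    Xc = X - X.mean(axis=0, keepdims=True)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:n_filters].T                        # shape (d, n_filters)

def train_residual_network(X, y, n_classes, n_stages=3, n_filters=8):
    stages = []
    F = np.zeros((len(y), n_classes))              # running ensemble output
    feats = X
    for _ in range(n_stages):
        W = pca_filters(feats, n_filters)          # unsupervised "conv" filters
        feats = np.maximum(feats @ W, 0.0)         # project + nonlinearity
        R = one_hot(y, n_classes) - F              # residual labels for this stage
        C, *_ = np.linalg.lstsq(feats, R, rcond=None)  # classifier fit to residual
        F = F + feats @ C                          # compensate the remaining error
        stages.append((W, C))
    return stages

def predict(stages, X, n_classes):
    F = np.zeros((len(X), n_classes))
    feats = X
    for W, C in stages:
        feats = np.maximum(feats @ W, 0.0)
        F = F + feats @ C
    return F.argmax(axis=1)
```

Because each stage only needs the accumulated output of the stages before it, training proceeds strictly forward, one layer at a time, which is what allows depth to grow to hundreds of layers without gradient computation.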
URL
https://arxiv.org/abs/2301.11663