Abstract
Acoustic scenes are rich and redundant in their content. In this work, we present a spatio-temporal attention pooling layer coupled with a convolutional recurrent neural network to learn discriminative patterns while suppressing those that are irrelevant for acoustic scene classification. The convolutional layers of this network learn invariant features from time-frequency input, and the bidirectional recurrent layers then encode the temporal dynamics of the resulting convolutional features. Afterwards, a two-dimensional attention mask, formed as the outer product of the spatial and temporal attention vectors learned by two designated attention layers, weights and pools the recurrent output into a final feature vector for classification. The network is trained on between-class examples generated by between-class data augmentation. Experiments demonstrate that the proposed method not only outperforms a strong convolutional neural network baseline but also sets a new state of the art on the LITIS Rouen dataset.
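The core pooling mechanism described above can be sketched numerically: a temporal attention vector (one weight per time step) and a spatial attention vector (one weight per feature dimension) are combined by an outer product into a 2-D mask that weights and sums the recurrent output into a single feature vector. This is a minimal NumPy illustration, not the authors' exact formulation; the projection weights `w_t` and `w_s` and the softmax normalisation are assumptions standing in for the paper's learned attention layers.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
T, D = 50, 128                    # time steps and recurrent feature size (illustrative)

H = rng.standard_normal((T, D))   # stand-in for the bidirectional recurrent output
w_t = rng.standard_normal(D)      # hypothetical temporal-attention projection weights
w_s = rng.standard_normal(T)      # hypothetical spatial-attention projection weights

alpha = softmax(H @ w_t)          # temporal attention vector, shape (T,)
beta = softmax(H.T @ w_s)         # spatial attention vector, shape (D,)

mask = np.outer(alpha, beta)      # 2-D attention mask, shape (T, D)
z = (mask * H).sum(axis=0)        # pooled feature vector for the classifier, shape (D,)
```

Because each attention vector sums to one, the mask entries also sum to one, so `z` is a convex-style weighted pooling of the recurrent features rather than an unscaled sum.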
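The between-class training mentioned in the abstract mixes two examples from different classes and uses the mixing ratio as a soft label. The sketch below is a simplified linear version under assumed names; the actual between-class augmentation may mix in a perceptually motivated way (e.g. accounting for signal energy), which is omitted here.

```python
import numpy as np

def between_class_mix(x1, y1, x2, y2, rng):
    """Mix two examples of different classes with a random ratio r;
    the label becomes the same ratio applied to the one-hot labels."""
    r = rng.uniform()                 # mixing ratio in [0, 1)
    x = r * x1 + (1.0 - r) * x2      # simple linear mix of the inputs (assumption)
    y = r * y1 + (1.0 - r) * y2      # soft between-class label
    return x, y

rng = np.random.default_rng(1)
x1 = rng.standard_normal(100)         # stand-in feature vectors
x2 = rng.standard_normal(100)
y1 = np.array([1.0, 0.0])             # one-hot labels of two different classes
y2 = np.array([0.0, 1.0])

x_mix, y_mix = between_class_mix(x1, y1, x2, y2, rng)
```

The soft label `y_mix` still sums to one, so the network can be trained against it with the usual cross-entropy-style loss over class probabilities.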
URL
https://arxiv.org/abs/1904.03543