Abstract
In the past, Acoustic Scene Classification systems were based on hand-crafted audio features that were input to a classifier. Nowadays, the common trend is to adopt data-driven techniques, e.g., deep learning, where audio representations are learned from data. In this paper, we propose a system that consists of a simple fusion of two methods of the aforementioned types: a deep learning approach, where log-scaled mel-spectrograms are input to a convolutional neural network, and a feature engineering approach, where a collection of hand-crafted features is input to a gradient boosting machine. We first show that both methods provide complementary information to some extent. Then, we use a simple late fusion strategy to combine both methods. We report the classification accuracy of each method individually and of the combined system on the TUT Acoustic Scenes 2017 dataset. The proposed fused system outperforms each of the individual methods and attains a classification accuracy of 72.8% on the evaluation set, improving on the baseline system by 11.8%.
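The late fusion strategy mentioned above can be sketched as a weighted average of the two classifiers' per-class probability outputs. This is a minimal illustration, not the authors' implementation: the abstract does not specify the fusion weights, so an equal-weight average is assumed here, and the function name `late_fusion` is hypothetical.

```python
import numpy as np

def late_fusion(cnn_probs, gbm_probs, w=0.5):
    """Fuse two classifiers' outputs by weighted averaging.

    cnn_probs, gbm_probs: arrays of shape (n_examples, n_classes)
    holding each model's predicted class probabilities.
    w: weight given to the CNN; (1 - w) goes to the GBM.
    Returns the predicted class index for each example.
    """
    fused = w * cnn_probs + (1.0 - w) * gbm_probs
    return fused.argmax(axis=1)

# Toy example: 2 clips, 3 acoustic-scene classes.
cnn = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.5, 0.3]])
gbm = np.array([[0.4, 0.5, 0.1],
                [0.1, 0.2, 0.7]])
print(late_fusion(cnn, gbm))  # -> [0 2]
```

Because the two models are only combined at the prediction stage, each can be trained and tuned independently, which is what makes this style of fusion "simple".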
URL
https://arxiv.org/abs/1806.07506