Paper Reading AI Learner

DCASE 2019: CNN depth analysis with different channel inputs for Acoustic Scene Classification

2019-06-10 09:32:41
Javier Naranjo-Alcazar, Sergi Perez-Castanos, Pedro Zuccarello, Maximo Cobos

Abstract

The objective of this technical report is to describe the framework used in Task 1, Acoustic Scene Classification (ASC), of the DCASE 2019 challenge. The presented approach is based on Log-Mel spectrogram representations and VGG-based Convolutional Neural Networks (CNNs). Three different CNNs with very similar architectures have been implemented; the main difference is the number of filters in their convolutional blocks. Experiments show that the depth of the network is not the most relevant factor for improving accuracy; performance seems to be more sensitive to the input audio representation. This conclusion is important for the implementation of real-time audio recognition and classification systems on edge devices. In the presented experiments, the best audio representation is the Log-Mel spectrogram of the harmonic and percussive sources plus the Log-Mel spectrogram of the difference between the left and right stereo channels. To further improve accuracy, ensemble methods combining the predictions of different models with different inputs are explored. Besides geometric and arithmetic means, ensembles aggregated with the Orness Weighted Average (OWA) operator have shown interesting and novel results. The proposed framework outperforms the baseline system by 14.34 percentage points. For Task 1a, the development accuracy is 76.84 percent versus a 62.5 percent baseline, and the public-leaderboard accuracy is 77.33 percent versus a 64.33 percent baseline.
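As a concrete illustration of the input representation described above, the following is a minimal sketch (not the authors' released code) of how the three-channel input could be built with librosa: Log-Mel spectrograms of the harmonic source, the percussive source, and the left-minus-right stereo difference. The sample rate and Mel parameters below are illustrative assumptions, not the report's settings.

    import numpy as np
    import librosa

    def log_mel_hpd(path, sr=48000, n_mels=64, n_fft=2048, hop=1024):
        """Three-channel input: Log-Mel of the harmonic source, the
        percussive source, and the left-minus-right difference signal.
        All parameter defaults are illustrative assumptions."""
        y, _ = librosa.load(path, sr=sr, mono=False)   # (2, n) for stereo
        mono = y.mean(axis=0)
        harmonic, percussive = librosa.effects.hpss(mono)
        diff = y[0] - y[1]                             # L - R channel

        def log_mel(sig):
            mel = librosa.feature.melspectrogram(
                y=sig, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels)
            return librosa.power_to_db(mel)

        # Stack as (3, n_mels, frames) for a CNN with three input channels.
        return np.stack([log_mel(harmonic), log_mel(percussive),
                         log_mel(diff)])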
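The three networks in the report share a VGG-style layout and differ only in the number of filters per convolutional block. A hedged Keras sketch of that idea follows; the block count, pooling, and classification head are illustrative assumptions rather than the paper's exact architecture.

    from tensorflow.keras import layers, models

    def vgg_block(x, filters):
        """Two 3x3 convolutions with batch norm, then max pooling."""
        for _ in range(2):
            x = layers.Conv2D(filters, 3, padding="same",
                              activation="relu")(x)
            x = layers.BatchNormalization()(x)
        return layers.MaxPooling2D(2)(x)

    def build_model(input_shape, n_classes, widths=(32, 64, 128)):
        """Changing `widths` varies the per-block filter counts, which is
        the axis the report compares; depth is held fixed."""
        inputs = layers.Input(shape=input_shape)  # e.g. (n_mels, frames, 3)
        x = inputs
        for f in widths:
            x = vgg_block(x, f)
        x = layers.GlobalAveragePooling2D()(x)
        outputs = layers.Dense(n_classes, activation="softmax")(x)
        return models.Model(inputs, outputs)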
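For the ensembles, the OWA operator weights predictions by their rank after sorting rather than by their source model, so a single weight vector can interpolate between max, mean, and min behavior (its "orness" measures how max-like it is). Below is a minimal NumPy sketch under that standard definition; the weight values are illustrative, not the report's.

    import numpy as np

    def owa(values, weights):
        """Ordered weighted averaging: sort descending, weight by rank."""
        return np.sort(values)[::-1] @ np.asarray(weights)

    def aggregate(probs, weights=None):
        """probs: (n_models, n_classes) per-model class probabilities.
        weights=None gives the arithmetic mean; otherwise OWA per class."""
        probs = np.asarray(probs)
        if weights is None:
            return probs.mean(axis=0)
        return np.apply_along_axis(owa, 0, probs, weights)

    # Three models' predictions for three classes (made-up numbers).
    p = np.array([[0.7, 0.2, 0.1],
                  [0.5, 0.3, 0.2],
                  [0.6, 0.1, 0.3]])
    arith = aggregate(p)                             # arithmetic mean
    geom = np.exp(np.log(p).mean(axis=0))            # geometric mean
    owa_out = aggregate(p, weights=[0.5, 0.3, 0.2])  # orness 0.65: optimistic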

URL

https://arxiv.org/abs/1906.04591

PDF

https://arxiv.org/pdf/1906.04591.pdf

