
Transformer based unsupervised pre-training for acoustic representation learning

2020-07-29 05:11:09
Ruixiong Zhang, Haiwei Wu, Wubo Li, Dongwei Jiang, Wei Zou, Xiangang Li

Abstract

Computational audio analysis has become a central topic in related research areas, and a variety of applications have arisen. However, for many acoustic tasks, the amount of labeled data may be limited. To handle this problem, we propose an unsupervised pre-training method using a Transformer-based encoder to learn a general and robust high-level representation for all acoustic tasks. Experiments have been conducted on three kinds of acoustic tasks: speech translation, speech emotion recognition, and sound event detection. All the experiments show that pre-training on a task's own training data makes the model converge significantly faster and improves performance. With larger pre-training data combining the MuST-C, Librispeech, and ESC-US datasets, the BLEU score for speech translation further improves by a relative 12.2% on the En-De dataset and 8.4% on the En-Fr dataset. For sound event detection, the F1 score further improves by an absolute 1.7% on the DCASE2018 Task 5 development set and 2.4% on the evaluation set. For speech emotion recognition, the UAR further improves by an absolute 4.3% on the IEMOCAP dataset.
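
The abstract names a Transformer-based encoder pre-trained on unlabeled audio, but does not spell out the pre-training objective. The sketch below is only an illustration of that general setup, assuming a masked-frame reconstruction loss on filterbank features (a common choice for acoustic Transformer pre-training); the class and function names (`AcousticTransformerEncoder`, `masked_reconstruction_loss`) and all hyperparameters are hypothetical and are not taken from the paper.

```python
# Minimal sketch of Transformer-encoder pre-training on acoustic frames.
# Assumption: masked-frame reconstruction objective (not specified in the abstract).
import torch
import torch.nn as nn


class AcousticTransformerEncoder(nn.Module):
    def __init__(self, feat_dim=80, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)            # project filterbank frames
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)   # high-level representation
        self.recon_head = nn.Linear(d_model, feat_dim)            # used only during pre-training

    def forward(self, feats):                                     # feats: (batch, time, feat_dim)
        hidden = self.encoder(self.input_proj(feats))
        return hidden, self.recon_head(hidden)


def masked_reconstruction_loss(model, feats, mask_prob=0.15):
    """Mask a fraction of frames and reconstruct them (hypothetical objective)."""
    mask = torch.rand(feats.shape[:2], device=feats.device) < mask_prob
    corrupted = feats.masked_fill(mask.unsqueeze(-1), 0.0)        # zero out masked frames
    _, recon = model(corrupted)
    return nn.functional.l1_loss(recon[mask], feats[mask])        # loss only on masked positions


# Usage: pre-train on unlabeled audio features, then reuse `model.encoder`
# as the front-end for a downstream task (translation, emotion, event detection).
model = AcousticTransformerEncoder()
feats = torch.randn(4, 200, 80)                                   # e.g. 80-dim log-Mel filterbanks
loss = masked_reconstruction_loss(model, feats)
loss.backward()
```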

URL

https://arxiv.org/abs/2007.14602

PDF

https://arxiv.org/pdf/2007.14602.pdf
