Paper Reading AI Learner

Look, Listen, and Answer: Overcoming Biases for Audio-Visual Question Answering

2024-04-18 09:16:02
Jie Ma, Min Hu, Pinghui Wang, Wangchun Sun, Lingyun Song, Hongbin Pei, Jun Liu, Youtian Du
         

Abstract

Audio-Visual Question Answering (AVQA) is a complex multi-modal reasoning task that requires intelligent systems to respond accurately to natural language queries based on paired audio-video inputs. However, prevalent AVQA approaches are prone to overlearning dataset biases, resulting in poor robustness, and current datasets may not provide a precise diagnostic of these methods. To tackle these challenges, we first propose a novel dataset, MUSIC-AVQA-R, crafted in two steps: rephrasing the questions in the test split of a public dataset (MUSIC-AVQA) and then introducing distribution shifts to split the questions. The former yields a large, diverse test space, while the latter enables a comprehensive robustness evaluation on rare, frequent, and overall questions. Second, we propose a robust architecture that employs a multifaceted cycle collaborative debiasing strategy to overcome bias learning. Experimental results show that this architecture achieves state-of-the-art performance on both datasets, including a significant improvement of 9.68% on the proposed dataset. Extensive ablation experiments on the two datasets validate the effectiveness of the debiasing strategy. Additionally, evaluation on our dataset highlights the limited robustness of existing multi-modal QA methods.
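The rare/frequent split described above can be illustrated with a small sketch. This is an assumption about how such a split might work (grouping test questions by answer frequency and treating the low-frequency tail as "rare"); the function name, threshold, and frequency criterion are hypothetical and not the paper's exact procedure.

```python
from collections import Counter

def split_by_frequency(samples, rare_quantile=0.2):
    """Split QA samples into 'frequent' and 'rare' subsets by answer frequency.

    `samples` is a list of (question, answer) pairs. The quantile threshold
    is an illustrative assumption, not the rule used in MUSIC-AVQA-R.
    """
    counts = Counter(answer for _, answer in samples)
    # Pick a cutoff count: answers at or below it form the rare tail.
    sorted_counts = sorted(counts.values())
    cutoff = sorted_counts[int(len(sorted_counts) * rare_quantile)]
    frequent, rare = [], []
    for question, answer in samples:
        (rare if counts[answer] <= cutoff else frequent).append((question, answer))
    return frequent, rare

# Toy example: one dominant answer ("piano") and one tail answer ("ukulele").
samples = [("what instrument is playing", "piano")] * 8 \
        + [("what instrument is playing", "ukulele")]
freq, rare = split_by_frequency(samples)
```

A split like this lets a benchmark report accuracy separately on head (frequent) and tail (rare) questions, exposing models that exploit answer-distribution biases.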


URL

https://arxiv.org/abs/2404.12020

PDF

https://arxiv.org/pdf/2404.12020.pdf

