Paper Reading AI Learner

Adaptive Audio-Visual Speech Recognition via Matryoshka-Based Multimodal LLMs

2025-03-09 00:02:10
Umberto Cappellazzo, Minsu Kim, Stavros Petridis

Abstract

Audio-Visual Speech Recognition (AVSR) leverages both audio and visual modalities to enhance speech recognition robustness, particularly in noisy environments. Recent advancements in Large Language Models (LLMs) have demonstrated their effectiveness in speech recognition, including AVSR. However, due to the significant length of speech representations, direct integration with LLMs imposes substantial computational costs. Prior approaches address this by compressing speech representations before feeding them into LLMs, but higher compression ratios often degrade performance, forcing a trade-off between computational efficiency and recognition accuracy. To address this challenge, we propose Llama-MTSK, the first Matryoshka-based Multimodal LLM for AVSR, which enables flexible adaptation of the audio-visual token allocation based on specific computational constraints while preserving high performance. Our approach, inspired by Matryoshka Representation Learning, encodes audio-visual representations at multiple granularities within a single model, eliminating the need to train separate models for different compression levels. Moreover, to efficiently fine-tune the LLM, we introduce three LoRA-based Matryoshka strategies using global and scale-specific LoRA modules. Extensive evaluations on the two largest AVSR datasets demonstrate that Llama-MTSK achieves state-of-the-art results, matching or surpassing models trained independently at fixed compression levels.
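The core idea of encoding representations at multiple granularities in a single model can be illustrated with a minimal sketch. The snippet below (an assumption about the mechanics, not the paper's actual implementation) average-pools a sequence of audio-visual tokens at several compression ratios in one pass, so the same token sequence can be served to the LLM at whichever length a given compute budget allows; the function name and ratio set are hypothetical.

```python
import numpy as np

def matryoshka_pool(tokens, ratios=(1, 2, 4)):
    """Sketch: pool a (seq_len, dim) token matrix at several compression
    ratios, mimicking multi-granularity Matryoshka-style encoding.
    Returns a dict mapping each ratio to its compressed token sequence."""
    out = {}
    seq_len, dim = tokens.shape
    for r in ratios:
        t = (seq_len // r) * r          # drop any remainder tokens
        # average r consecutive tokens into one
        out[r] = tokens[:t].reshape(t // r, r, dim).mean(axis=1)
    return out

# toy token sequence: 8 tokens of dimension 4
x = np.arange(8 * 4, dtype=float).reshape(8, 4)
lengths = {r: v.shape[0] for r, v in matryoshka_pool(x).items()}
print(lengths)  # {1: 8, 2: 4, 4: 2}
```

In training, losses from all granularities would be combined so one set of weights supports every compression level; at inference, only the ratio matching the available compute is used.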

URL

https://arxiv.org/abs/2503.06362

PDF

https://arxiv.org/pdf/2503.06362.pdf

