
Cross-Attention Fusion of Visual and Geometric Features for Large Vocabulary Arabic Lipreading

2024-02-18 09:22:58
Samar Daou, Ahmed Rekik, Achraf Ben-Hamadou, Abdelaziz Kallel


Lipreading uses visual data to recognize spoken words by analyzing the movements of the lips and the surrounding region. It is an active research topic with many potential applications, such as human-machine interaction and enhancing audio speech recognition. Recent deep learning-based works aim to integrate visual features extracted from the mouth region with landmark points on the lip contours. However, a simple combination method such as concatenation may not be the most effective way to obtain an optimal feature vector. To address this challenge, we first propose a cross-attention fusion-based approach for large-vocabulary Arabic lipreading to predict spoken words in videos. Our method leverages cross-attention networks to efficiently integrate visual and geometric features computed on the mouth region. Second, we introduce the first large-scale Lip Reading in the Wild for Arabic (LRW-AR) dataset, containing 20,000 videos for 100 word classes uttered by 36 speakers. Experimental results on the LRW-AR and ArabicVisual databases show the effectiveness and robustness of the proposed approach in recognizing Arabic words. Our work provides insights into the feasibility and effectiveness of applying lipreading techniques to the Arabic language, opening doors for further research in this field. Link to the project page: this https URL
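The abstract does not spell out the fusion layer, but the core idea it describes can be sketched in a few lines: visual features from the mouth region act as attention queries, while geometric landmark features supply the keys and values, so the visual stream selects which geometric information to absorb. The following minimal NumPy sketch is illustrative only; the dimensions, the random projections standing in for learned weights, and the final concatenation step are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(visual, geometric, d_k=32, seed=0):
    """Fuse per-frame visual and geometric features with scaled
    dot-product cross-attention: visual features form the queries,
    geometric (landmark) features form the keys and values.

    visual:    (T, Dv) visual features, one row per video frame
    geometric: (T, Dg) landmark-based features for the same frames
    Returns a (T, Dv + d_k) fused sequence (visual stream concatenated
    with the attended geometric stream). Weight matrices are random
    stand-ins for what would be learned parameters in the real model.
    """
    rng = np.random.default_rng(seed)
    Dv, Dg = visual.shape[1], geometric.shape[1]
    Wq = rng.standard_normal((Dv, d_k)) / np.sqrt(Dv)  # query projection
    Wk = rng.standard_normal((Dg, d_k)) / np.sqrt(Dg)  # key projection
    Wv = rng.standard_normal((Dg, d_k)) / np.sqrt(Dg)  # value projection

    Q, K, V = visual @ Wq, geometric @ Wk, geometric @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (T, T) cross-modal weights
    attended = attn @ V                     # geometric info routed by visual queries
    return np.concatenate([visual, attended], axis=1)

# Toy example: 10 frames, 64-dim visual and 40-dim (20 x/y landmark) features.
fused = cross_attention_fuse(np.ones((10, 64)), np.ones((10, 40)))
print(fused.shape)  # (10, 96)
```

Unlike plain concatenation, which the abstract flags as suboptimal, the attention weights here let each frame's visual features decide how much of each frame's geometric evidence to incorporate before the two streams are joined.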



