Abstract
Lipreading uses visual data to recognize spoken words by analyzing the movements of the lips and the surrounding facial region. It is an active research topic with many potential applications, such as human-machine interaction and enhancing audio-based speech recognition. Recent deep learning-based works aim to integrate the visual features extracted from the mouth region with the landmark points on the lip contours. However, a simple combination method such as concatenation may not be the most effective way to obtain an optimal feature vector. To address this challenge, we first propose a cross-attention fusion-based approach for large-lexicon Arabic lipreading that predicts spoken words in videos. Our method leverages cross-attention networks to efficiently integrate the visual and geometric features computed on the mouth region. Second, we introduce the first large-scale Lip Reading in the Wild for Arabic (LRW-AR) dataset, containing 20,000 videos for 100 word classes, uttered by 36 speakers. Experimental results on the LRW-AR and ArabicVisual databases show the effectiveness and robustness of the proposed approach in recognizing Arabic words. Our work provides insights into the feasibility and effectiveness of applying lipreading techniques to the Arabic language, opening doors for further research in this field. Link to the project page: this https URL
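To illustrate the kind of fusion the abstract describes, here is a minimal PyTorch sketch of cross-attention between a visual stream (mouth-ROI embeddings) and a geometric stream (lip-landmark embeddings). This is not the authors' implementation: the module names, dimensions, and the residual-plus-concatenation design are illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's code): cross-attention fusion
# of visual mouth-ROI features with geometric lip-landmark features.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=512, num_heads=8):
        super().__init__()
        # Each stream attends to the other, rather than being concatenated raw.
        self.vis_to_geo = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.geo_to_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_g = nn.LayerNorm(dim)
        self.proj = nn.Linear(2 * dim, dim)  # fuse the two attended streams

    def forward(self, visual, geometric):
        # visual, geometric: (batch, time, dim) frame-level feature sequences
        v_att, _ = self.vis_to_geo(query=visual, key=geometric, value=geometric)
        g_att, _ = self.geo_to_vis(query=geometric, key=visual, value=visual)
        v = self.norm_v(visual + v_att)    # residual + norm per stream
        g = self.norm_g(geometric + g_att)
        return self.proj(torch.cat([v, g], dim=-1))  # (batch, time, dim)

# Example: fuse 29 frames of 512-d visual and landmark embeddings.
fusion = CrossAttentionFusion()
fused = fusion(torch.randn(2, 29, 512), torch.randn(2, 29, 512))
print(fused.shape)  # torch.Size([2, 29, 512])
```

Unlike plain concatenation, each stream here is first contextualized by attending to the other, which is the motivation the abstract gives for moving beyond a simple combination of the two feature types.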
URL
https://arxiv.org/abs/2402.11520