Abstract
Visual speech recognition (VSR) is the task of recognizing spoken language from video input alone, without any audio. VSR has many applications as an assistive technology, especially if it can be deployed on mobile devices and embedded systems. The need for intensive computational resources and a large memory footprint are two of the major obstacles to developing neural network models for VSR in resource-constrained environments. We propose a novel end-to-end deep neural network architecture for word-level VSR, called MobiVSR, with a design parameter that balances the model's accuracy against its parameter count. We use depthwise-separable 3D convolution for the first time in the domain of VSR and show how it makes our model efficient. MobiVSR achieves an accuracy of 73% on the challenging Lip Reading in the Wild dataset with 6 times fewer parameters and a 20 times smaller memory footprint than the current state of the art. MobiVSR can also be compressed to 6 MB by applying post-training quantization.
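The efficiency gain from depthwise-separable 3D convolution comes from factorizing one large convolution into a per-channel spatial convolution plus a 1x1x1 channel-mixing convolution. The sketch below compares parameter counts for the two layer types; the channel and kernel sizes (64 to 128 channels, 3x3x3 kernel) are illustrative assumptions, not layer shapes taken from the paper.

```python
# Parameter-count comparison: standard 3D convolution vs. a
# depthwise-separable 3D convolution (biases ignored).
# Layer sizes here are illustrative, not from the MobiVSR paper.

def conv3d_params(c_in, c_out, k):
    """Weights of a standard 3D conv: c_out filters of size c_in*k*k*k."""
    return c_in * c_out * k ** 3

def ds_conv3d_params(c_in, c_out, k):
    """Depthwise 3D conv (one k*k*k filter per input channel)
    followed by a 1x1x1 pointwise conv that mixes channels."""
    depthwise = c_in * k ** 3
    pointwise = c_in * c_out
    return depthwise + pointwise

standard = conv3d_params(64, 128, 3)      # 221,184 weights
separable = ds_conv3d_params(64, 128, 3)  # 9,920 weights
print(standard, separable, round(standard / separable, 1))
```

For this hypothetical layer the separable form uses roughly 22 times fewer weights, which is the kind of saving that lets the full network fit the reported parameter and memory budget.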
URL
https://arxiv.org/abs/1905.03968