Abstract
Robot vision often involves a large computational load because large images must be processed in a short amount of time. Existing solutions often reduce image quality, which can degrade downstream processing. Another approach is to generate regions of interest with expensive vision algorithms. In this paper, we evaluate how audio can be used to generate regions of interest in optical images. To achieve this, we propose a unique attention mechanism to localize speech sources and evaluate its impact on a face detection algorithm. Our results show that the attention mechanism reduces the computational load. The proposed pipeline is flexible and can be easily adapted for human-robot interaction, robot surveillance, video conferencing or smart glasses.
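The core idea above (an audio-derived attention mechanism producing a region of interest for face detection) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes a hypothetical audio localizer has already produced a speech-source azimuth, and maps that angle to a vertical image strip under a simple pinhole-camera model with a known horizontal field of view. The function name, the `hfov_deg` and `roi_frac` parameters, and the strip-shaped ROI are all assumptions for illustration.

```python
import math

def azimuth_to_roi(azimuth_deg, img_width, img_height,
                   hfov_deg=90.0, roi_frac=0.25):
    """Map a speech-source azimuth (degrees, from a hypothetical audio
    localizer) to a rectangular region of interest (x0, y0, x1, y1).

    Assumes a pinhole camera whose optical axis corresponds to azimuth 0
    and whose horizontal field of view is hfov_deg. The ROI is a
    full-height vertical strip covering roi_frac of the image width.
    """
    # Focal length in pixels implied by the horizontal field of view.
    f = (img_width / 2) / math.tan(math.radians(hfov_deg / 2))
    # Horizontal pixel coordinate of the source under the pinhole model.
    x = img_width / 2 + f * math.tan(math.radians(azimuth_deg))
    half = int(roi_frac * img_width / 2)
    x0 = max(0, int(x) - half)
    x1 = min(img_width, int(x) + half)
    return (x0, 0, x1, img_height)

# A face detector would then run only on the cropped strip, e.g. a 25%-wide
# ROI means roughly a quarter of the pixels of the full frame to process.
roi = azimuth_to_roi(0.0, 640, 480)  # source straight ahead
```

For a 640x480 frame and a source straight ahead, the ROI is centered: `(240, 0, 400, 480)`. The computational saving in the paper comes from running detection on such a region instead of the whole image; the exact ROI geometry used there may differ.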
URL
https://arxiv.org/abs/2309.08005