Abstract
We propose a visually grounded speech model that acquires new words and their visual depictions from just a few word-image example pairs. Given a set of test images and a spoken query, we ask the model which image depicts the query word. Previous work has simplified this problem either by using an artificial setting with digit word-image pairs or by relying on a large number of examples per class. We propose an approach that works on natural word-image pairs but with fewer examples, i.e. fewer shots. Our approach uses the given word-image example pairs to mine new unsupervised word-image training pairs from large collections of unlabelled speech and images. Additionally, we use a word-to-image attention mechanism to determine word-image similarity. With this new model, we achieve better performance with fewer shots than any existing approach.
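To make the retrieval setup concrete, here is a minimal sketch of the kind of word-to-image attention scoring the abstract describes: a query word embedding attends over an image's patch embeddings, and the attention-weighted similarity ranks the candidate images. This is an illustrative assumption, not the paper's actual architecture; the embedding dimensions and the softmax-over-patches scoring are hypothetical choices for the sketch.

```python
import numpy as np

def word_image_similarity(word_emb, patch_embs):
    """Score how well an image depicts a spoken query word.

    word_emb:   (d,) embedding of the spoken query word.
    patch_embs: (p, d) embeddings of the image's spatial patches.

    Attention weights over patches are a softmax of the dot products;
    the score is the attention-weighted patch similarity. (Illustrative
    only -- the paper's exact scoring function may differ.)
    """
    logits = patch_embs @ word_emb                  # (p,) per-patch similarity
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()                        # softmax over patches
    return float(weights @ logits)                  # weighted similarity

def rank_images(word_emb, images):
    """Return indices of candidate images, best match for the word first."""
    scores = [word_image_similarity(word_emb, p) for p in images]
    return sorted(range(len(images)), key=lambda i: -scores[i])
```

At test time, the few-shot task reduces to calling `rank_images` with the query word's embedding and the patch embeddings of each candidate image, and taking the top-ranked image.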
URL
https://arxiv.org/abs/2305.15937