Abstract
Multimodal machine learning algorithms aim to learn visual-textual correspondences. Previous work suggests that concepts with concrete visual manifestations may be easier to learn than concepts with abstract ones. We give an algorithm for automatically computing the visual concreteness of words and topics within multimodal datasets. We apply the approach in four settings, ranging from image captions to images/text scraped from historical books. In addition to enabling explorations of concepts in multimodal datasets, our concreteness scores predict the capacity of machine learning algorithms to learn textual/visual relationships. We find that 1) concrete concepts are indeed easier to learn; 2) the large number of algorithms we consider have similar failure cases; 3) the precise positive relationship between concreteness and performance varies between datasets. We conclude with recommendations for using concreteness scores to facilitate future multimodal research.
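One way to make "visual concreteness" operational (a minimal illustrative sketch, not necessarily the paper's exact formulation) is a nearest-neighbor clustering score: a word is concrete if the images it is associated with sit close together in visual feature space more often than chance. The function below, with its hypothetical name and toy Euclidean features, assumes each image comes with a feature vector and a set of associated words:

```python
import numpy as np

def concreteness_scores(features, word_sets, k=5):
    """Illustrative nearest-neighbor concreteness sketch (assumed setup).

    features:  (n, d) array of visual feature vectors, one per image.
    word_sets: list of n sets of words associated with each image.
    Returns {word: score}; score > 1 means images tagged with the word
    cluster together visually more than their base rate predicts.
    """
    # Pairwise Euclidean distances; mask self-distances on the diagonal.
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    # Indices of each image's k nearest visual neighbors.
    nn = np.argsort(d, axis=1)[:, :k]

    vocab = set().union(*word_sets)
    scores = {}
    for w in vocab:
        has_w = np.array([w in s for s in word_sets])
        base_rate = has_w.mean()
        if not 0 < base_rate < 1:
            continue  # skip words present in all or no images
        # Fraction of neighbors of w-tagged images that are also w-tagged,
        # normalized by how common w is overall.
        idx = np.where(has_w)[0]
        scores[w] = has_w[nn[idx]].mean() / base_rate
    return scores
```

On synthetic data with two tight visual clusters tagged "cat" and "dog" plus a function word "the" spread across both, the concrete words score well above 1 while "the" stays near 1, matching the intuition that concrete concepts have consistent visual manifestations.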
URL
https://arxiv.org/abs/1804.06786