Abstract
Multimodal learning assumes all modality combinations of interest are available during training to learn cross-modal correspondences. In this paper, we challenge this modality-complete assumption for multimodal learning and instead strive for generalization to unseen modality combinations during inference. We pose the problem of unseen modality interaction and introduce a first solution. It exploits a feature projection module that maps the multidimensional features of different modalities into a common space while preserving rich information. This allows the information to be accumulated with a simple summation operation across whichever modalities are available. To reduce overfitting to unreliable modality combinations during training, we further improve model learning with pseudo-supervision indicating the reliability of each modality's prediction. We demonstrate that our approach is effective for diverse tasks and modalities by evaluating it on multimodal video classification, robot state regression, and multimedia retrieval.
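The abstract does not specify the projection module's architecture, so the following is only a minimal sketch of the projection-and-summation idea it describes: per-modality projectors into a common space, fused by summing over whichever modalities are present. All names and choices here (`ModalityProjector`, the feature dimensions, a single linear layer per modality) are illustrative assumptions rather than the paper's implementation, and the pseudo-supervision component is omitted since the abstract gives no detail on it.

```python
import torch
import torch.nn as nn

class ModalityProjector(nn.Module):
    """Hypothetical sketch: project features of differing dimensionality
    from each modality into a shared space of size `common_dim`.

    Assumption: one linear layer per modality; the paper's actual
    projection module may be more elaborate.
    """
    def __init__(self, modality_dims: dict[str, int], common_dim: int = 256):
        super().__init__()
        self.projectors = nn.ModuleDict({
            name: nn.Linear(dim, common_dim)
            for name, dim in modality_dims.items()
        })

    def forward(self, features: dict[str, torch.Tensor]) -> torch.Tensor:
        # Only the modalities actually present at inference time contribute;
        # their projected features are accumulated by simple summation.
        projected = [self.projectors[name](x) for name, x in features.items()]
        return torch.stack(projected, dim=0).sum(dim=0)

# Usage: train with video+audio, then infer with audio alone,
# i.e. a modality combination unseen during training.
model = ModalityProjector({"video": 1024, "audio": 128})
fused = model({"audio": torch.randn(4, 128)})  # batch of 4, audio only
print(fused.shape)  # torch.Size([4, 256])
```

Because fusion is an order-free sum in the common space, any subset of modalities yields a fused representation of the same shape, which is what lets a downstream head handle combinations never seen during training.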
URL
https://arxiv.org/abs/2306.12795