Abstract
We present a new method to learn video representations from unlabeled data. Given large-scale unlabeled video data, the objective is to benefit from such data by learning a generic and transferable representation space that can be directly used for a new task, such as zero/few-shot learning. We formulate our unsupervised representation learning as a multi-modal, multi-task learning problem, where the representations are also shared across different modalities via distillation. Further, we introduce the concept of finding a better loss function to train such a multi-task, multi-modal representation space using an evolutionary algorithm; our method automatically searches over different combinations of loss functions capturing multiple (self-supervised) tasks and modalities. Our formulation allows for the distillation of audio, optical flow, and temporal information into a single, RGB-based convolutional neural network. We also compare the effects of using additional unlabeled video data and evaluate our representation learning on standard public video datasets.
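As a rough illustration of the loss-search idea, the sketch below evolves a weight vector for a combined loss L = Σᵢ wᵢ·Lᵢ over several self-supervised tasks. This is a minimal sketch, not the paper's implementation: the task names are hypothetical, and the fitness function is a stand-in for what the paper describes, training a network with the weighted loss and scoring the learned representation.

```python
# Minimal sketch of evolutionary search over loss-function combinations.
# NOT the authors' code: task names and fitness are illustrative placeholders.
import random

TASKS = ["rgb_rotation", "flow_prediction", "audio_alignment", "temporal_order"]  # hypothetical

def random_weights():
    # Random non-negative weights, normalized to sum to 1.
    w = [random.random() for _ in TASKS]
    s = sum(w)
    return [x / s for x in w]

def mutate(w, sigma=0.1):
    # Perturb each weight with Gaussian noise, clip at zero, renormalize.
    w = [max(0.0, x + random.gauss(0, sigma)) for x in w]
    s = sum(w) or 1.0
    return [x / s for x in w]

def fitness(weights):
    # Placeholder: in the paper, evaluating a candidate would mean training a
    # network with the combined loss sum_i w_i * L_i and scoring the resulting
    # representation. Here we return a dummy score so the sketch runs end to end.
    target = [0.4, 0.3, 0.2, 0.1]  # arbitrary "good" weighting for illustration
    return -sum((a - b) ** 2 for a, b in zip(weights, target))

def evolve(pop_size=20, generations=30):
    population = [random_weights() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 4]  # keep the fittest quarter
        children = [mutate(random.choice(parents)) for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print(dict(zip(TASKS, [round(w, 3) for w in best])))
```

In practice each fitness evaluation is expensive (a full or proxy training run), so the population size and number of generations trade search quality against compute.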
URL
https://arxiv.org/abs/1906.03248