Abstract
Self-supervised video representation learning has typically aimed at maximizing the similarity between different temporal segments of one video in order to enforce feature persistence over time. This leads to a loss of pertinent information related to temporal relationships, rendering actions such as 'enter' and 'leave' indistinguishable. To mitigate this limitation, we propose Latent Time Navigation (LTN), a time-parameterized contrastive learning strategy designed to capture fine-grained motions. Specifically, we maximize the representation similarity between different segments of one video while keeping their representations time-aware along a subspace of the latent code that includes an orthogonal basis representing temporal changes. Our extensive experimental analysis suggests that learning video representations with LTN consistently improves action classification performance on fine-grained and human-oriented tasks (e.g., on the Toyota Smarthome dataset). In addition, we demonstrate that our proposed model, when pre-trained on Kinetics-400, generalizes well to the unseen real-world video benchmarks UCF101 and HMDB51, achieving state-of-the-art performance in action recognition.
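To make the idea of a time-aware latent subspace concrete, below is a minimal PyTorch sketch of a time-parameterized contrastive objective in the spirit of the abstract. It is not the authors' implementation: the class name `TimeAwareContrast`, the module `time_mlp`, the basis size `num_basis`, and all hyperparameters are illustrative assumptions. The sketch shifts one segment's embedding along a learnable (softly orthogonalized) temporal basis according to the time offset between the two segments, then applies an InfoNCE-style contrastive loss.

```python
# Illustrative sketch only; names and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TimeAwareContrast(nn.Module):
    def __init__(self, feat_dim=512, num_basis=8, temperature=0.1):
        super().__init__()
        # Learnable basis spanning the "temporal" subspace of the latent code.
        self.basis = nn.Parameter(torch.randn(num_basis, feat_dim))
        # Maps the time offset between two segments to coefficients over the basis.
        self.time_mlp = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, num_basis))
        self.temperature = temperature

    def orthogonality_penalty(self):
        # Encourage the basis vectors to stay near-orthonormal.
        b = F.normalize(self.basis, dim=1)
        gram = b @ b.t()
        return ((gram - torch.eye(b.size(0), device=b.device)) ** 2).mean()

    def forward(self, z1, z2, dt):
        # z1, z2: (B, D) embeddings of two segments from the same video.
        # dt: (B, 1) normalized time offset between the two segments.
        # Shift z1 along the temporal subspace so the positive pair is compared
        # in a time-aware manner instead of being forced to collapse over time.
        coeff = self.time_mlp(dt)              # (B, num_basis)
        z1_shifted = z1 + coeff @ self.basis   # (B, D)
        p = F.normalize(z1_shifted, dim=1)
        q = F.normalize(z2, dim=1)
        logits = p @ q.t() / self.temperature  # (B, B)
        labels = torch.arange(z1.size(0), device=z1.device)
        # InfoNCE: the positive for each segment is the matching segment of the same video.
        return F.cross_entropy(logits, labels) + 0.1 * self.orthogonality_penalty()
```

Under these assumptions, the module would be called with the two segment embeddings produced by a video encoder and their normalized time offset, e.g. `loss = TimeAwareContrast()(z1, z2, dt)`, with `z1`, `z2` of shape (B, 512) and `dt` of shape (B, 1).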
URL
https://arxiv.org/abs/2305.06437