Abstract
Time-aware encoding of frame sequences in a video is a fundamental problem in video understanding. While many works have attempted to model time in videos, an explicit study quantifying video time is missing. To fill this lacuna, we aim to evaluate video time explicitly. We describe three properties of video time, namely a) temporal asymmetry, b) temporal continuity, and c) temporal causality. Based on each, we formulate a task able to quantify the associated property. This allows assessing the effectiveness of modern video encoders, such as C3D and LSTM, in their ability to model time. Our analysis provides insights about existing encoders while also leading us to propose a new video time encoder, which is better suited to the video time recognition tasks than C3D and LSTM. We believe the proposed meta-analysis can provide a reasonable baseline to assess video time encoders on equal grounds across a set of temporally aware tasks.
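To make the first property concrete: a temporal-asymmetry task amounts to deciding whether a clip is played forward or in reverse. The following is a minimal illustrative sketch of such a task on synthetic data, not the paper's actual benchmark or encoder; the drift-based clip generator and the trivial half-mean "encoder" are assumptions introduced purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_clip(n_frames=16, dim=8):
    # Synthetic "video": per-frame features drift upward over time,
    # giving each clip a temporal direction (an arrow of time).
    # This generator is a stand-in, not data from the paper.
    base = rng.normal(size=dim)
    drift = np.linspace(0.0, 1.0, n_frames)[:, None]
    noise = 0.05 * rng.normal(size=(n_frames, dim))
    return base + drift + noise

def direction_score(clip):
    # Trivial time-aware "encoder": compares the mean of the late
    # frames to the mean of the early frames; positive means the
    # clip looks like it is played forward.
    half = len(clip) // 2
    return clip[half:].mean() - clip[:half].mean()

# Build a balanced temporal-asymmetry dataset: half forward, half reversed.
clips = [make_clip() for _ in range(100)]
labels = [1] * 50 + [0] * 50            # 1 = forward, 0 = reversed
data = clips[:50] + [c[::-1] for c in clips[50:]]

preds = [1 if direction_score(c) > 0 else 0 for c in data]
accuracy = np.mean([p == y for p, y in zip(preds, labels)])
```

A real evaluation in the paper's spirit would swap the synthetic clips for video frame features and the half-mean heuristic for a learned encoder (e.g. C3D or an LSTM), keeping the forward-vs-reversed labels as the quantifiable signal of temporal asymmetry.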
URL
https://arxiv.org/abs/1807.06980