Abstract
We introduce Spatial-Temporal Memory Networks for video object detection. At its core, a novel Spatial-Temporal Memory module (STMM) serves as the recurrent computation unit to model long-term temporal appearance and motion dynamics. The STMM's design enables full integration of pretrained backbone CNN weights, which we find to be critical for accurate detection. Furthermore, to tackle object motion in videos, we propose a novel MatchTrans module that aligns the spatial-temporal memory from frame to frame. Our method produces state-of-the-art results on the benchmark ImageNet VID dataset, and our ablation studies clearly demonstrate the contribution of our different design choices. We release our code and models at <a href="http://fanyix.cs.ucdavis.edu/project/stmn/project.html">http://fanyix.cs.ucdavis.edu/project/stmn/project.html</a>.
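To illustrate the frame-to-frame alignment idea behind MatchTrans, here is a minimal NumPy sketch. It is an assumption-laden illustration, not the authors' implementation: for each spatial cell of the current frame, it computes softmax-normalized feature affinities against a local window in the previous frame and uses them to warp the previous memory. The function name `match_trans` and the window parameter `k` are hypothetical.

```python
import numpy as np

def match_trans(feat_prev, feat_cur, mem_prev, k=2):
    """Warp the previous memory to the current frame via local feature affinity.

    feat_prev, feat_cur: (C, H, W) feature maps of consecutive frames.
    mem_prev: (D, H, W) spatial-temporal memory to be aligned.
    k: half-width of the local search window (illustrative choice).
    """
    C, H, W = feat_cur.shape
    D = mem_prev.shape[0]
    mem_aligned = np.zeros((D, H, W))
    for y in range(H):
        for x in range(W):
            # clip the (2k+1)x(2k+1) neighborhood to the feature map bounds
            y0, y1 = max(0, y - k), min(H, y + k + 1)
            x0, x1 = max(0, x - k), min(W, x + k + 1)
            # affinity between the current cell and each previous-frame neighbor
            patch = feat_prev[:, y0:y1, x0:x1].reshape(C, -1)   # (C, N)
            scores = feat_cur[:, y, x] @ patch                  # (N,)
            weights = np.exp(scores - scores.max())             # stable softmax
            weights /= weights.sum()
            # warped memory cell: affinity-weighted sum of previous memory
            mem_aligned[:, y, x] = mem_prev[:, y0:y1, x0:x1].reshape(D, -1) @ weights
    return mem_aligned
```

In this sketch, the aligned memory would then feed the recurrent STMM update for the current frame, so that memory content tracks objects as they move.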
URL
https://arxiv.org/abs/1712.06317