Abstract
In this report, we describe in detail our approach to the ActivityNet 2018 Kinetics-600 challenge. Although existing state-of-the-art methods for this task adopt spatial-temporal modelling, either through end-to-end frameworks such as I3D \cite{i3d} or through two-stage frameworks (i.e., CNN+RNN), video modelling is still far from solved. For this challenge, we propose the spatial-temporal network (StNet) for better joint spatial-temporal modelling and more comprehensive video understanding. In addition, since video sources contain multi-modal information, we integrate both early-fusion and late-fusion strategies for multi-modal information via our proposed improved temporal Xception network (iTXN). Our single StNet RGB model achieves 78.99\% top-1 precision on the Kinetics-600 validation set, and our improved temporal Xception network, which integrates the RGB, flow and audio modalities, reaches 82.35\%. After model ensembling, we achieve a top-1 precision as high as 85.0\% on the validation set and rank No. 1 among all submissions.
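To make the early-fusion/late-fusion idea concrete, below is a minimal PyTorch sketch of combining the two strategies over RGB, flow and audio features. It is only an illustration under assumed feature dimensions and class count; the module name, layer sizes and the simple score averaging are hypothetical and are not the actual iTXN architecture described in the paper.

```python
import torch
import torch.nn as nn

class MultiModalFusionSketch(nn.Module):
    """Hypothetical sketch: combine early fusion (concatenate modality
    features before classification) with late fusion (average per-modality
    classifier scores). Dimensions and structure are assumptions, not iTXN."""

    def __init__(self, rgb_dim=2048, flow_dim=2048, audio_dim=128,
                 hidden_dim=1024, num_classes=600):
        super().__init__()
        # Early fusion: one classifier over the concatenated features.
        self.early_head = nn.Sequential(
            nn.Linear(rgb_dim + flow_dim + audio_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )
        # Late fusion: an independent classifier per modality.
        self.rgb_head = nn.Linear(rgb_dim, num_classes)
        self.flow_head = nn.Linear(flow_dim, num_classes)
        self.audio_head = nn.Linear(audio_dim, num_classes)

    def forward(self, rgb, flow, audio):
        early = self.early_head(torch.cat([rgb, flow, audio], dim=-1))
        late = (self.rgb_head(rgb) + self.flow_head(flow)
                + self.audio_head(audio)) / 3.0
        # Average the early- and late-fusion logits (a simple placeholder
        # for whatever combination the real model uses).
        return (early + late) / 2.0
```

A final model ensemble, as mentioned in the abstract, would similarly average (or otherwise combine) the prediction scores of several such models.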
URL
https://arxiv.org/abs/1806.10319