Abstract
We describe a DNN for fine-grained action classification and video captioning. It achieves state-of-the-art performance on the challenging Something-Something dataset, which contains over 220,000 videos spanning 174 fine-grained actions. Classification and captioning on this dataset are challenging because of the subtle differences between actions, the thousands of different objects involved, and the diversity of the captions penned by the crowd actors. The model architecture shares features between classification and captioning and is trained end-to-end. It substantially outperforms the existing classification benchmark for Something-Something, with strong fine-grained results, and it provides a solid baseline for the new Something-Something captioning task. Our results reveal a strong correlation between the degree of detail in the task and the ability of the learned features to transfer to other tasks.
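The shared-feature, jointly trained design can be made concrete with a small sketch. The following is a minimal illustration, not the authors' implementation: the `SharedVideoModel` class, the 3D-conv encoder, the LSTM caption decoder, and all layer sizes are hypothetical stand-ins; only the structure of two task heads over shared features, trained end-to-end with a summed loss, reflects what the abstract describes.

```python
# A minimal sketch (not the paper's implementation) of a video model that
# shares features between a classification head and a captioning head.
# All module names, sizes, and the encoder/decoder choices are assumptions.
import torch
import torch.nn as nn

class SharedVideoModel(nn.Module):
    def __init__(self, num_classes=174, vocab_size=10000, feat_dim=512):
        super().__init__()
        # Shared spatio-temporal feature extractor (hypothetical 3D-CNN stub).
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Action-classification head over the 174 fine-grained classes.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Captioning head: an LSTM decoder conditioned on the shared features.
        self.embed = nn.Embedding(vocab_size, feat_dim)
        self.decoder = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.word_out = nn.Linear(feat_dim, vocab_size)

    def forward(self, video, captions):
        # video: (B, 3, T, H, W); captions: (B, L) token ids.
        feats = self.encoder(video)                # (B, feat_dim), shared
        class_logits = self.classifier(feats)      # (B, num_classes)
        # Initialize the decoder hidden state with the shared features.
        h0 = feats.unsqueeze(0)                    # (1, B, feat_dim)
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(self.embed(captions), (h0, c0))
        word_logits = self.word_out(out)           # (B, L, vocab_size)
        return class_logits, word_logits

# Joint end-to-end training step: summing the two losses lets gradients
# from both tasks flow into the shared encoder.
model = SharedVideoModel()
video = torch.randn(2, 3, 8, 32, 32)
captions = torch.randint(0, 10000, (2, 6))
labels = torch.randint(0, 174, (2,))
class_logits, word_logits = model(video, captions[:, :-1])
loss = (nn.functional.cross_entropy(class_logits, labels)
        + nn.functional.cross_entropy(word_logits.reshape(-1, 10000),
                                      captions[:, 1:].reshape(-1)))
loss.backward()
```

In this kind of setup, the classification loss acts as a strong supervisory signal for the shared encoder, while the captioning loss pushes the same features toward finer object- and action-level detail, which is consistent with the abstract's observation that more detailed tasks yield more transferable features.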
URL
https://arxiv.org/abs/1804.09235