Paper Reading AI Learner

Video Captioning with Guidance of Multimodal Latent Topics

2017-09-02 15:34:44
Shizhe Chen, Jia Chen, Qin Jin, Alexander Hauptmann

Abstract

The topic diversity of open-domain videos leads to various vocabularies and linguistic expressions in describing video contents, and therefore makes the video captioning task even more challenging. In this paper, we propose a unified caption framework, M&M TGM, which mines multimodal topics in an unsupervised fashion from data and guides the caption decoder with these topics. Compared to pre-defined topics, the mined multimodal topics are more semantically and visually coherent and can better reflect the topic distribution of videos. We formulate topic-aware caption generation as a multi-task learning problem, in which we add a parallel task, topic prediction, in addition to the caption task. For the topic prediction task, we use the mined topics as the teacher to train a student topic prediction model, which learns to predict the latent topics from the multimodal contents of videos. The topic prediction provides intermediate supervision to the learning process. As for the caption task, we propose a novel topic-aware decoder to generate more accurate and detailed video descriptions with guidance from the latent topics. The entire learning procedure is end-to-end and optimizes both tasks simultaneously. The results from extensive experiments conducted on the MSR-VTT and Youtube2Text datasets demonstrate the effectiveness of our proposed model. M&M TGM not only outperforms prior state-of-the-art methods on multiple evaluation metrics and on both benchmark datasets, but also achieves better generalization ability.
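The multi-task structure described above can be sketched as a single forward pass: a student model predicts a latent-topic distribution from fused multimodal features, and the caption decoder conditions each decoding step on that distribution. This is a minimal illustrative sketch, not the paper's implementation; all dimensions, weight matrices, and function names here are hypothetical, and the unsupervised topic mining that produces the teacher targets is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): fused-feature dim, number of
# latent topics, vocabulary size, decoder hidden size.
D_FEAT, N_TOPICS, N_VOCAB, D_HID = 8, 4, 10, 6

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Student topic predictor: maps fused multimodal video features to a
# topic distribution. In the paper, its training targets come from the
# unsupervised multimodal topic mining (the "teacher").
W_topic = rng.normal(size=(D_FEAT, N_TOPICS))

def predict_topics(video_feat):
    return softmax(video_feat @ W_topic)

# Topic-aware decoder step: next-word probabilities conditioned on both
# the decoder hidden state and the predicted topic distribution.
W_h = rng.normal(size=(D_HID, N_VOCAB))
W_t = rng.normal(size=(N_TOPICS, N_VOCAB))

def decode_step(hidden, topic_dist):
    return softmax(hidden @ W_h + topic_dist @ W_t)

video_feat = rng.normal(size=(D_FEAT,))  # fused multimodal features
hidden = rng.normal(size=(D_HID,))       # current decoder hidden state

topics = predict_topics(video_feat)
word_probs = decode_step(hidden, topics)

# End-to-end training would jointly minimize a caption cross-entropy on
# word_probs and a topic-prediction loss against the mined topics.
```

The key design point reflected here is that topic prediction is a parallel head rather than a preprocessing step, so its gradient acts as intermediate supervision on the shared features while the decoder learns to exploit the topic signal.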


URL

https://arxiv.org/abs/1708.09667

PDF

https://arxiv.org/pdf/1708.09667.pdf

