Paper Reading AI Learner

Hierarchical LSTM with Adjusted Temporal Attention for Video Captioning

2017-06-05 08:09:20
Jingkuan Song, Zhao Guo, Lianli Gao, Wu Liu, Dongxiang Zhang, Heng Tao Shen

Abstract

Recent progress has been made in using attention-based encoder-decoder frameworks for video captioning. However, most existing decoders apply the attention mechanism to every generated word, including both visual words (e.g., "gun" and "shooting") and non-visual words (e.g., "the" and "a"). These non-visual words can be easily predicted by a natural language model without considering visual signals or attention, and imposing the attention mechanism on them can mislead the decoder and degrade the overall performance of video captioning. To address this issue, we propose a hierarchical LSTM with adjusted temporal attention (hLSTMat) approach for video captioning. Specifically, the proposed framework uses temporal attention to select specific frames for predicting the related words, while the adjusted temporal attention decides whether to rely on the visual information or on the language context information. In addition, hierarchical LSTMs are designed to simultaneously consider both low-level visual information and high-level language context information to support video caption generation. To demonstrate the effectiveness of our proposed framework, we test our method on two prevalent datasets, MSVD and MSR-VTT; experimental results show that our approach outperforms the state-of-the-art methods on both datasets.
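
The decoding step described in the abstract combines a two-layer (hierarchical) LSTM, temporal attention over frame features, and a gate that adjusts how much each word prediction relies on visual versus language context. The snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation; the layer names, feature dimensions, and the exact gating formulation are assumptions made for illustration.

# Minimal sketch (assumed details, not the authors' code) of one decoding step
# of a hierarchical LSTM with adjusted temporal attention, in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HLSTMatDecoderStep(nn.Module):
    def __init__(self, vocab_size, embed_dim=512, feat_dim=1536, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Bottom (low-level) LSTM consumes the current word embedding.
        self.bottom_lstm = nn.LSTMCell(embed_dim, hidden_dim)
        # Top (high-level) LSTM consumes the bottom hidden state plus the mixed context.
        self.top_lstm = nn.LSTMCell(hidden_dim + feat_dim, hidden_dim)
        # Temporal attention over per-frame CNN features.
        self.att_feat = nn.Linear(feat_dim, hidden_dim)
        self.att_hid = nn.Linear(hidden_dim, hidden_dim)
        self.att_score = nn.Linear(hidden_dim, 1)
        # Adjusted-attention gate: how much to rely on visual vs. language context.
        self.gate = nn.Linear(hidden_dim, 1)
        self.lang_ctx = nn.Linear(hidden_dim, feat_dim)
        self.logit = nn.Linear(hidden_dim, vocab_size)

    def forward(self, word, frame_feats, bottom_state, top_state):
        # word:        (batch,) indices of the previously generated word
        # frame_feats: (batch, n_frames, feat_dim) CNN features of sampled frames
        h_b, c_b = self.bottom_lstm(self.embed(word), bottom_state)

        # Temporal attention: weight frames by their relevance to the current state.
        att = torch.tanh(self.att_feat(frame_feats) + self.att_hid(h_b).unsqueeze(1))
        alpha = F.softmax(self.att_score(att).squeeze(-1), dim=1)    # (batch, n_frames)
        visual_ctx = (alpha.unsqueeze(-1) * frame_feats).sum(dim=1)  # (batch, feat_dim)

        # Adjusted temporal attention: a scalar gate in [0, 1] blends the attended
        # visual context with a purely language-based context, so non-visual words
        # (e.g. "the", "a") can be predicted without attending to the frames.
        beta = torch.sigmoid(self.gate(h_b))                          # (batch, 1)
        context = beta * visual_ctx + (1 - beta) * self.lang_ctx(h_b)

        h_t, c_t = self.top_lstm(torch.cat([h_b, context], dim=1), top_state)
        return self.logit(h_t), (h_b, c_b), (h_t, c_t)

In this sketch, a gate value near zero lets the decoder fall back on the language context for function words, while a value near one makes it attend to the selected frames for visual words.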

URL

https://arxiv.org/abs/1706.01231

PDF

https://arxiv.org/pdf/1706.01231.pdf
