Storytelling of Photo Stream with Bidirectional Multi-thread Recurrent Neural Network

2016-06-02 11:13:04
Yu Liu, Jianlong Fu, Tao Mei, Chang Wen Chen

Abstract

Visual storytelling aims to generate human-level narrative language (i.e., a natural paragraph with multiple sentences) from a photo stream. A typical photo story consists of a global timeline with multi-thread local storylines, where each storyline occurs in a different scene. Such a complex structure leads to large content gaps at scene transitions between consecutive photos. Most existing image/video captioning methods achieve only limited performance here, because the units in traditional recurrent neural networks (RNNs) tend to "forget" the previous state when the visual sequence is inconsistent. In this paper, we propose a novel visual storytelling approach with a Bidirectional Multi-thread Recurrent Neural Network (BMRNN). First, based on the mined local storylines, a skip gated recurrent unit (sGRU) with delay control is proposed to maintain longer-range visual information. Second, using sGRUs as basic units, the BMRNN is trained to align the local storylines with the global sequential timeline. Third, a new training scheme with a storyline-constrained objective function is proposed that jointly considers both global and local matches. Experiments on three standard storytelling datasets show that the BMRNN model outperforms state-of-the-art methods.
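
The abstract describes but does not formalize the sGRU. Below is a minimal NumPy sketch of one plausible reading: a standard GRU augmented with a delay-controlled gate that can pull in the hidden state from d steps back (e.g., the last photo of the same local storyline), so that state can survive an inconsistent scene transition instead of being "forgotten". The class name, weight layout, gating rule, and choice of d are all illustrative assumptions, not the paper's exact formulation (see the PDF linked below for that).

import numpy as np
from collections import deque

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SkipGRUCell:
    """Hypothetical skip gated recurrent unit (sGRU) sketch.

    On top of the usual GRU update from h_{t-1}, a skip gate s_t
    (the assumed "delay control") interpolates toward a delayed
    state h_{t-d}, letting information bridge scene transitions.
    """

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        def mat(rows, cols):
            return rng.standard_normal((rows, cols)) * 0.01
        self.Wz, self.Uz = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim)  # update gate
        self.Wr, self.Ur = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim)  # reset gate
        self.Wh, self.Uh = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim)  # candidate state
        self.Ws, self.Us = mat(hidden_dim, input_dim), mat(hidden_dim, hidden_dim)  # skip (delay) gate

    def step(self, x_t, h_prev, h_skip):
        """One step: x_t = photo feature, h_prev = h_{t-1}, h_skip = h_{t-d}."""
        z = sigmoid(self.Wz @ x_t + self.Uz @ h_prev)
        r = sigmoid(self.Wr @ x_t + self.Ur @ h_prev)
        h_cand = np.tanh(self.Wh @ x_t + self.Uh @ (r * h_prev))
        h_gru = (1.0 - z) * h_prev + z * h_cand          # standard GRU blend
        s = sigmoid(self.Ws @ x_t + self.Us @ h_skip)    # delay-controlled skip gate
        return (1.0 - s) * h_gru + s * h_skip            # mix in the delayed state

# Toy usage with a fixed delay d = 2; the paper instead ties the skip
# to the mined storyline structure, which is not reproduced here.
cell = SkipGRUCell(input_dim=4, hidden_dim=8)
d = 2
history = deque([np.zeros(8)] * d, maxlen=d)  # history[0] is h_{t-d}
h = np.zeros(8)
for x_t in np.random.default_rng(1).standard_normal((6, 4)):
    h = cell.step(x_t, h, history[0])
    history.append(h)

Gating toward h_{t-d} rather than hard-wiring it lets the cell ignore the skip path on visually consistent steps and lean on it at storyline boundaries, which is consistent with the abstract's claim of maintaining longer-range visual information.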

URL

https://arxiv.org/abs/1606.00625

PDF

https://arxiv.org/pdf/1606.00625.pdf

