Paper Reading AI Learner

HAAV: Hierarchical Aggregation of Augmented Views for Image Captioning

2023-05-25 17:50:17
Chia-Wen Kuo, Zsolt Kira


A great deal of progress has been made in image captioning, driven by research into how to encode the image using pre-trained models. This includes visual encodings (e.g. image grid features or detected objects) and, more recently, textual encodings (e.g. image tags or text descriptions of image regions). As more advanced encodings become available and are incorporated, a natural question arises: how can we efficiently and effectively leverage this heterogeneous set of encodings? In this paper, we propose to regard the encodings as augmented views of the input image. The image captioning model efficiently encodes each view independently with a shared encoder, and a contrastive loss is incorporated across the encoded views in a novel way to improve their representation quality and the model's data efficiency. Our proposed hierarchical decoder then adaptively weighs the encoded views according to their effectiveness for caption generation, first aggregating within each view at the token level and then across views at the view level. We demonstrate significant performance improvements of +5.6% CIDEr on MS-COCO and +12.9% CIDEr on Flickr30k over the state of the art, and conduct rigorous analyses to demonstrate the importance of each part of our design.
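The two-level aggregation described in the abstract, pooling within each view at the token level and then across views at the view level, can be illustrated with a minimal sketch. This is not the paper's actual architecture (which uses a learned hierarchical decoder); it is a simplified dot-product-attention analogue, and the function name and shapes are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_aggregate(views, query):
    """Hypothetical two-level aggregation sketch.

    views: list of (n_tokens_i, d) arrays, one per encoded view
           (e.g. grid features, detected objects, text descriptions).
    query: (d,) vector standing in for the decoder's current state.
    """
    # Level 1: within each view, attention-pool its tokens into one vector.
    pooled = []
    for v in views:
        w = softmax(v @ query)   # (n_tokens_i,) token-level weights
        pooled.append(w @ v)     # (d,) pooled view representation
    pooled = np.stack(pooled)    # (n_views, d)
    # Level 2: across views, weigh each pooled view by relevance to the query.
    w = softmax(pooled @ query)  # (n_views,) view-level weights
    return w @ pooled            # (d,) final aggregated context
```

Because both levels produce convex combinations, views (or tokens) that are less relevant to the current decoding step contribute less to the final context vector, mirroring the adaptive weighting the paper describes.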



