Paper Reading AI Learner

Align and Attend: Multimodal Summarization with Dual Contrastive Losses

2023-03-13 17:01:42
Bo He, Jun Wang, Jielin Qiu, Trung Bui, Abhinav Shrivastava, Zhaowen Wang

Abstract

The goal of multimodal summarization is to extract the most important information from different modalities to form summaries. Unlike unimodal summarization, the multimodal summarization task explicitly leverages cross-modal information to help generate more reliable and high-quality summaries. However, existing methods fail to leverage the temporal correspondence between different modalities and ignore the intrinsic correlation between different samples. To address these issues, we introduce Align and Attend Multimodal Summarization (A2Summ), a unified multimodal transformer-based model which can effectively align and attend to the multimodal input. In addition, we propose two novel contrastive losses to model both inter-sample and intra-sample correlations. Extensive experiments on two standard video summarization datasets (TVSum and SumMe) and two multimodal summarization datasets (Daily Mail and CNN) demonstrate the superiority of A2Summ, which achieves state-of-the-art performance on all datasets. Moreover, we collected a large-scale multimodal summarization dataset, BLiSS, which contains livestream videos and transcribed texts with annotated summaries. Our code and dataset are publicly available at this https URL.
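
For intuition only, here is a minimal sketch of what the two "dual contrastive losses" described in the abstract could look like in PyTorch. This is a hypothetical illustration based solely on the abstract, not the authors' A2Summ implementation: the function names, the symmetric InfoNCE / supervised-contrastive formulations, and the temperature value are all assumptions.

```python
# Hypothetical sketch of inter-sample and intra-sample contrastive losses.
# NOT the authors' code; every design choice here is an assumption.
import torch
import torch.nn.functional as F


def inter_sample_loss(video_emb: torch.Tensor, text_emb: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over the batch: the video/text pair coming from
    the same sample is the positive; pairs from other samples are negatives."""
    v = F.normalize(video_emb, dim=-1)            # (B, D)
    t = F.normalize(text_emb, dim=-1)             # (B, D)
    logits = v @ t.T / temperature                # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))


def intra_sample_loss(frame_emb: torch.Tensor, key_mask: torch.Tensor,
                      temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss over the time steps of one sample:
    features of annotated key frames are pulled together and pushed away
    from the remaining (non-key) frames."""
    f = F.normalize(frame_emb, dim=-1)            # (T, D)
    sim = f @ f.T / temperature                   # (T, T)
    eye = torch.eye(f.size(0), dtype=torch.bool, device=f.device)
    sim = sim.masked_fill(eye, float('-inf'))     # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = key_mask.unsqueeze(0) & key_mask.unsqueeze(1) & ~eye  # key-key pairs
    anchors = pos.any(dim=1)                      # key frames with >= 1 positive
    per_anchor = -(log_prob.masked_fill(~pos, 0.0).sum(dim=1)[anchors]
                   / pos.sum(dim=1)[anchors])
    return per_anchor.mean()


if __name__ == "__main__":
    B, T, D = 4, 20, 128                          # toy sizes
    print(inter_sample_loss(torch.randn(B, D), torch.randn(B, D)))
    key = torch.zeros(T, dtype=torch.bool)
    key[:5] = True                                # pretend the first 5 frames are key
    print(intra_sample_loss(torch.randn(T, D), key))
```

Both losses use temperature-scaled cosine similarities, the standard contrastive setup; the inter-sample term contrasts samples across the batch, while the intra-sample term contrasts time steps within one sample, matching the two correlation types the abstract names.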

URL

https://arxiv.org/abs/2303.07284

PDF

https://arxiv.org/pdf/2303.07284.pdf

