HowToCaption: Prompting LLMs to Transform Video Annotations at Scale

2023-10-07 19:32:55
Nina Shvetsova, Anna Kukleva, Xudong Hong, Christian Rupprecht, Bernt Schiele, Hilde Kuehne

Abstract

Instructional videos are an excellent source for learning multimodal representations by leveraging video-subtitle pairs, where the subtitles are extracted from the videos' audio signal with automatic speech recognition (ASR) systems. However, in contrast to human-annotated captions, both speech and subtitles naturally differ from the visual content of the videos and thus provide only noisy supervision for multimodal learning. As a result, large-scale annotation-free web video training data remains sub-optimal for training text-video models. In this work, we propose to leverage the capability of large language models (LLMs) to obtain fine-grained video descriptions aligned with videos. Specifically, we prompt an LLM to create plausible video descriptions based on the ASR narrations of videos from a large-scale instructional video dataset. To this end, we introduce a prompting method that takes into account longer subtitle passages, allowing us to capture contextual information beyond a single sentence. To align the captions to the video temporally, we further prompt the LLM to generate timestamps for each produced caption based on the subtitle timestamps. In this way, we obtain human-style video captions at scale without human supervision. We apply our method to the subtitles of the HowTo100M dataset, creating a new large-scale dataset, HowToCaption. Our evaluation shows that the resulting captions not only significantly improve performance over many different benchmark datasets for text-video retrieval but also lead to a disentangling of textual narration from the audio, boosting performance in text-video-audio tasks.
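
To make the described pipeline concrete, below is a minimal Python sketch of the core idea: prompt an LLM with a block of timestamped ASR subtitles and parse timestamped captions back out of its answer. The prompt wording, the bracketed timestamp format, and the generic llm callable are illustrative assumptions, not the authors' exact prompt or model interface.

```python
import re

def build_prompt(subtitles):
    """subtitles: list of (start_sec, end_sec, text) tuples from ASR."""
    lines = [f"[{int(s)}s-{int(e)}s] {t}" for s, e, t in subtitles]
    return (
        "The following are ASR subtitles from an instructional video.\n"
        + "\n".join(lines)
        + "\n\nWrite short captions describing what is likely visible in the "
        "video. Prefix each caption with a timestamp in the form [START-END]."
    )

def parse_captions(llm_output):
    """Extract (start, end, caption) triples from lines like '[12-18] ...'."""
    pattern = re.compile(r"\[(\d+)\s*-\s*(\d+)\]\s*(.+)")
    return [
        (int(m.group(1)), int(m.group(2)), m.group(3).strip())
        for m in map(pattern.match, llm_output.splitlines())
        if m
    ]

def caption_clip(subtitles, llm):
    """llm: any callable mapping a prompt string to a completion string."""
    return parse_captions(llm(build_prompt(subtitles)))

if __name__ == "__main__":
    # Dummy LLM stand-in so the sketch runs without any API access.
    fake_llm = lambda prompt: "[0-12] A person chops onions on a cutting board."
    subs = [(0, 12, "so first you want to dice your onion real fine")]
    print(caption_clip(subs, fake_llm))
```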

URL

https://arxiv.org/abs/2310.04900

PDF

https://arxiv.org/pdf/2310.04900.pdf

