
TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation

2024-05-07 21:52:39
Hritik Bansal, Yonatan Bitton, Michal Yarom, Idan Szpektor, Aditya Grover, Kai-Wei Chang

Abstract

Recent advances in diffusion-based generative modeling have led to the development of text-to-video (T2V) models that can generate high-quality videos conditioned on a text prompt. Most of these T2V models produce single-scene video clips that depict an entity performing a particular action (e.g., `a red panda climbing a tree'). However, it is pertinent to generate multi-scene videos, since they are ubiquitous in the real world (e.g., `a red panda climbing a tree' followed by `the red panda sleeps on the top of the tree'). To generate multi-scene videos from a pretrained T2V model, we introduce the Time-Aligned Captions (TALC) framework. Specifically, we enhance the text-conditioning mechanism in the T2V architecture to recognize the temporal alignment between the video scenes and the scene descriptions. For instance, we condition the visual features of the earlier and later scenes of the generated video on the representations of the first scene description (e.g., `a red panda climbing a tree') and the second scene description (e.g., `the red panda sleeps on the top of the tree'), respectively. As a result, we show that the T2V model can generate multi-scene videos that adhere to the multi-scene text descriptions while remaining visually consistent (e.g., in entity and background). Further, we finetune the pretrained T2V model on multi-scene video-text data using the TALC framework. We show that the TALC-finetuned model outperforms the baseline methods by 15.5 points on the overall score, which averages visual consistency and text adherence as judged by human evaluation. The project website is this https URL.
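The core mechanism described above — conditioning each temporal segment of the generated video on its own scene description rather than on a single pooled caption — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `talc_conditioning`, the equal-length contiguous scene split, and the padding scheme are all hypothetical choices made for the example.

```python
import torch

def talc_conditioning(text_embs, num_frames):
    """
    Build per-frame text conditioning in the spirit of TALC (a minimal
    sketch, not the paper's code). Instead of conditioning every frame
    on one pooled caption, each frame attends only to the token
    embeddings of the scene description whose time span it falls in.

    text_embs:  list of S tensors, one per scene, each of shape (L_s, D)
    num_frames: total number of video frames F to generate
    returns:    (F, L_max, D) context tensor and a (F, L_max) key-padding
                mask (True = padding), ready for per-frame cross-attention
    """
    S = len(text_embs)
    D = text_embs[0].shape[-1]
    L_max = max(e.shape[0] for e in text_embs)

    # Assumption: scenes occupy equal, contiguous spans of the video.
    frames_per_scene = [num_frames // S + (i < num_frames % S) for i in range(S)]

    context = torch.zeros(num_frames, L_max, D)
    mask = torch.ones(num_frames, L_max, dtype=torch.bool)
    f = 0
    for s, emb in enumerate(text_embs):
        L = emb.shape[0]
        for _ in range(frames_per_scene[s]):
            context[f, :L] = emb   # frame f sees only scene s's caption tokens
            mask[f, :L] = False
            f += 1
    return context, mask
```

In an actual T2V diffusion model, this per-frame context would replace the single shared caption context inside the cross-attention layers, so that frames in the first scene attend to `a red panda climbing a tree' and frames in the second scene attend to `the red panda sleeps on the top of the tree'.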


URL

https://arxiv.org/abs/2405.04682

PDF

https://arxiv.org/pdf/2405.04682.pdf

