Abstract
A short video clip may contain the progression of multiple events and an interesting story line. A human needs to capture the event in every shot and associate the shots together to understand the story behind them. In this work, we present Shot2Story20K, a new multi-shot video understanding benchmark with detailed shot-level captions and comprehensive video summaries. To facilitate better semantic understanding of videos, we provide captions for both visual signals and human narrations. We design several distinct tasks, including single-shot video and narration captioning, multi-shot video summarization, and video retrieval with shot descriptions. Preliminary experiments show that generating a long and comprehensive video summary remains challenging. Nevertheless, the generated imperfect summaries can already significantly boost the performance of existing video understanding tasks such as video question-answering, promoting an under-explored setting of video understanding with detailed summaries.
URL
https://arxiv.org/abs/2312.10300