Abstract
The development of Audio Description (AD) has been a pivotal step forward in making video content more accessible and inclusive. Traditionally, AD production has demanded a considerable amount of skilled labor, while existing automated approaches still require extensive training to integrate multimodal inputs and to shift the output from a captioning style to an AD style. In this paper, we introduce an automated AD generation pipeline that harnesses the potent multimodal and instruction-following capabilities of GPT-4V(ision). Notably, our methodology employs readily available components, eliminating the need for additional training. It produces ADs that not only comply with established natural language AD production standards but also maintain contextually consistent character information across frames, courtesy of a tracking-based character recognition module. A thorough analysis of the MAD dataset reveals that our approach achieves performance on par with learning-based methods in automated AD production, as substantiated by a CIDEr score of 20.5.
URL
https://arxiv.org/abs/2405.00983