Abstract
In this paper, we introduce ResNetVLLM (ResNet Vision LLM), a novel cross-modal framework for zero-shot video understanding that integrates a ResNet-based visual encoder with a Large Language Model (LLM). ResNetVLLM addresses the challenges associated with zero-shot video models by avoiding reliance on pre-trained video understanding models and instead employing a non-pretrained ResNet to extract visual features. This design ensures that the model learns visual and semantic representations within a unified architecture, enhancing its ability to generate accurate and contextually relevant textual descriptions from video inputs. Our experimental results demonstrate that ResNetVLLM achieves state-of-the-art performance in zero-shot video understanding (ZSVU) on several benchmarks, including MSRVTT-QA, MSVD-QA, TGIF-QA FrameQA, and ActivityNet-QA.
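To make the described design concrete, the sketch below shows one way a non-pretrained ResNet frame encoder could be coupled with an LLM for video-to-text generation, as the abstract outlines. The frame-sampling scheme, the linear projection into the LLM embedding space, and the placeholder "gpt2" backbone are assumptions for illustration only and are not the authors' implementation.

```python
# Minimal sketch of a ResNet-encoder + LLM video-to-text pipeline.
# Assumptions (not from the paper): 8 sampled frames, a linear projection
# into the LLM token-embedding space, and "gpt2" as a stand-in backbone.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from transformers import AutoModelForCausalLM, AutoTokenizer


class ResNetVLLMSketch(nn.Module):
    def __init__(self, llm_name: str = "gpt2"):
        super().__init__()
        # Non-pretrained ResNet (weights=None), as described in the abstract.
        self.visual_encoder = resnet50(weights=None)
        self.visual_encoder.fc = nn.Identity()          # expose 2048-d pooled features
        self.llm = AutoModelForCausalLM.from_pretrained(llm_name)
        self.tokenizer = AutoTokenizer.from_pretrained(llm_name)
        hidden = self.llm.get_input_embeddings().embedding_dim
        # Project per-frame features into the LLM's token-embedding space.
        self.proj = nn.Linear(2048, hidden)

    @torch.no_grad()
    def describe(self, frames: torch.Tensor, prompt: str, max_new_tokens: int = 40) -> str:
        """frames: (num_frames, 3, 224, 224) tensor of sampled video frames."""
        frame_feats = self.visual_encoder(frames)                  # (T, 2048)
        visual_tokens = self.proj(frame_feats).unsqueeze(0)        # (1, T, hidden)
        prompt_ids = self.tokenizer(prompt, return_tensors="pt").input_ids
        prompt_emb = self.llm.get_input_embeddings()(prompt_ids)   # (1, L, hidden)
        inputs_embeds = torch.cat([visual_tokens, prompt_emb], dim=1)
        attn = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)
        out = self.llm.generate(inputs_embeds=inputs_embeds,
                                attention_mask=attn,
                                max_new_tokens=max_new_tokens)
        return self.tokenizer.decode(out[0], skip_special_tokens=True)


if __name__ == "__main__":
    model = ResNetVLLMSketch()
    dummy_frames = torch.randn(8, 3, 224, 224)   # 8 sampled frames as a placeholder
    print(model.describe(dummy_frames, "Describe the video:"))
```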
URL
https://arxiv.org/abs/2504.14432