Abstract
Visual Language Models (VLMs) have progressed rapidly with the recent success of large language models. However, there have been few attempts to incorporate efficient linear Recurrent Neural Network (RNN) architectures into VLMs. In this study, we introduce VisualRWKV, the first application of a linear RNN model to multimodal learning tasks, leveraging the pre-trained RWKV language model. We propose a data-dependent recurrence and sandwich prompts to enhance our modeling capabilities, along with a 2D image scanning mechanism to enrich the processing of visual sequences. Extensive experiments demonstrate that VisualRWKV achieves competitive performance compared to Transformer-based models like LLaVA-1.5 on various benchmarks. To facilitate further research and analysis, we have made the checkpoints and the associated code publicly accessible at the following GitHub repository: this https URL.
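The two mechanisms the abstract names can be pictured with a minimal sketch. The Python below is a hypothetical illustration, not the authors' released code: the names `sandwich_prompt` and `scan_2d`, the 24x24 grid size, and the choice to concatenate the four scan directions are all assumptions; the paper's actual prompt layout and scan fusion may differ.

```python
# Hypothetical sketch, not VisualRWKV's released implementation.
import numpy as np

def sandwich_prompt(instruction_ids, image_tokens):
    """Sandwich layout (assumed): the text instruction appears both
    before and after the image tokens, so the recurrent state revisits
    the question after consuming the image."""
    return instruction_ids + image_tokens + instruction_ids

def scan_2d(patch_grid):
    """2D scanning (assumed variant): unroll an HxWxC grid of patch
    embeddings in four directions (row-major, reversed row-major,
    column-major, reversed column-major) and concatenate, giving the
    1D recurrence several views of the 2D layout."""
    h, w, c = patch_grid.shape
    row_fwd = patch_grid.reshape(h * w, c)
    row_bwd = row_fwd[::-1]
    col_fwd = patch_grid.transpose(1, 0, 2).reshape(h * w, c)
    col_bwd = col_fwd[::-1]
    return np.concatenate([row_fwd, row_bwd, col_fwd, col_bwd], axis=0)

# Example: a 24x24 grid of 1024-d visual features, as a CLIP-style
# encoder might produce, flattened into a 4x-longer scanned sequence.
grid = np.random.randn(24, 24, 1024).astype(np.float32)
seq = scan_2d(grid)
assert seq.shape == (4 * 24 * 24, 1024)
```

A real model would project these visual features into the language model's embedding space before interleaving them with text tokens; the sketch only shows the sequence-construction idea.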
Abstract (translated)
Visual Language Models (VLMs) have developed rapidly amid the recent success of large language models. However, there have been no attempts to incorporate efficient linear Recurrent Neural Network (RNN) architectures into VLMs. In this paper, we introduce VisualRWKV, the first instance of applying a linear RNN model to multimodal learning tasks, leveraging the pre-trained RWKV language model. We propose a data-dependent recurrence and sandwich prompts to enhance our modeling capabilities, and adopt a 2D image scanning mechanism to enrich the processing of visual sequences. Extensive experiments demonstrate that VisualRWKV achieves performance competitive with Transformer-based models such as LLaVA-1.5 across various benchmarks. To facilitate further research and analysis, we have released the checkpoints and associated code at the following GitHub repository: this https URL.
URL
https://arxiv.org/abs/2406.13362