Abstract
The rapid advancement of native multi-modal models and omni-models, exemplified by GPT-4o, Gemini, and o3, which can process and generate content across modalities such as text and images, marks a significant milestone in the evolution of intelligence. Systematic evaluation of their multi-modal output capabilities in visual thinking processes (also known as multi-modal chain of thought, M-CoT) has become critically important. However, existing benchmarks for evaluating multi-modal models primarily assess multi-modal inputs and text-only reasoning, neglecting the importance of reasoning through multi-modal outputs. In this paper, we present a benchmark, dubbed RBench-V, designed to assess models' vision-indispensable reasoning abilities. To construct RBench-V, we carefully hand-pick 803 questions covering math, physics, counting, and games. Unlike previous benchmarks that typically specify certain input modalities, RBench-V presents problems centered on multi-modal outputs, which require image manipulation, such as generating novel images and constructing auxiliary lines, to support the reasoning process. We evaluate numerous open- and closed-source models on RBench-V, including o3, Gemini 2.5 Pro, and Qwen2.5-VL. Even the best-performing model, o3, achieves only 25.8% accuracy on RBench-V, far below the human score of 82.3%, highlighting that current models struggle to leverage multi-modal reasoning. Data and code are available at this https URL
URL
https://arxiv.org/abs/2505.16770