Abstract
Recently, multimodal large language models (MLLMs) have attracted increasing research attention due to their powerful visual understanding capabilities. While they have achieved impressive results on various vision tasks, their performance on chart-to-code generation remains suboptimal. This task requires MLLMs to generate executable code that can reproduce a given chart, demanding not only precise visual understanding but also accurate translation of visual elements into structured code. Directly prompting MLLMs to perform this complex task often yields unsatisfactory results. To address this challenge, we propose {ChartIR}, an iterative refinement method based on structured instruction. First, we distinguish two tasks: visual understanding and code translation. To accomplish the visual understanding component, we design two types of structured instructions: description and difference. The description instruction captures the visual elements of the reference chart, while the difference instruction characterizes the discrepancies between the reference chart and the generated chart. These instructions effectively transform visual features into language representations, thereby facilitating the subsequent code translation process. Second, we decompose the overall chart generation pipeline into two stages: initial code generation and iterative refinement, enabling progressive enhancement of the final output. Experimental results show that, compared to other methods, our method achieves superior performance on both the open-source model Qwen2-VL and the closed-source model GPT-4o.
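The two-stage pipeline described above can be sketched as a simple control loop. This is a minimal illustration, not the authors' implementation: the `mllm`, `render`, and instruction-building functions are hypothetical placeholders standing in for calls to a multimodal model (e.g. Qwen2-VL or GPT-4o) and a plotting runtime; only the control flow — description-guided initial generation followed by difference-guided refinement — mirrors the abstract.

```python
def mllm(prompt: str) -> str:
    """Hypothetical stand-in for a multimodal LLM call."""
    return f"code_for({prompt})"

def describe_chart(reference_chart: str) -> str:
    """Description instruction: capture visual elements of the reference chart."""
    return mllm(f"Describe the visual elements of {reference_chart}")

def diff_charts(reference_chart: str, generated_chart: str) -> str:
    """Difference instruction: characterize discrepancies between the charts."""
    return mllm(f"List differences between {reference_chart} and {generated_chart}")

def render(code: str) -> str:
    """Hypothetical stand-in for executing the code to produce a chart image."""
    return f"chart_from({code})"

def chartir(reference_chart: str, max_rounds: int = 3) -> str:
    # Stage 1: initial code generation from a structured description,
    # turning visual features into a language representation first.
    description = describe_chart(reference_chart)
    code = mllm(f"Write plotting code for: {description}")

    # Stage 2: iterative refinement guided by difference instructions,
    # progressively closing the gap between generated and reference charts.
    for _ in range(max_rounds):
        generated_chart = render(code)
        difference = diff_charts(reference_chart, generated_chart)
        code = mllm(f"Revise the code ({code}) to fix: {difference}")
    return code
```

In practice the loop would also need a stopping criterion (e.g. terminate early when the difference instruction reports no discrepancies), which the fixed `max_rounds` here only approximates.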
URL
https://arxiv.org/abs/2506.14837