Abstract
Digital agents are increasingly employed to automate tasks in interactive digital environments such as web pages, software applications, and operating systems. While text-based agents built on Large Language Models (LLMs) often require frequent updates due to platform-specific APIs, visual agents leveraging Multimodal Large Language Models (MLLMs) offer enhanced adaptability by interacting directly with Graphical User Interfaces (GUIs). However, these agents face significant challenges in visual perception, particularly when handling high-resolution, visually complex digital environments. This paper introduces Iris, a foundational visual agent that addresses these challenges through two key innovations: Information-Sensitive Cropping (ISC) and Self-Refining Dual Learning (SRDL). ISC dynamically identifies and prioritizes visually dense regions using an edge detection algorithm, enabling efficient processing by allocating more computational resources to areas with higher information density. SRDL enhances the agent's ability to handle complex tasks by leveraging a dual-learning loop, in which improvements in referring (describing UI elements) reinforce grounding (locating elements) and vice versa, all without requiring additional annotated data. Empirical evaluations demonstrate that Iris achieves state-of-the-art performance across multiple benchmarks with only 850K GUI annotations, outperforming methods that use 10x more training data. These improvements further translate into significant gains on both web and OS agent downstream tasks.
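The abstract only names the cropping mechanism, so the sketch below is a rough illustration of the general idea rather than the paper's implementation: it scores grid cells of a screenshot by edge density (a proxy for information density) and returns the densest regions as crops. The grid size, OpenCV Canny thresholds, and crop scaling are all assumptions for illustration.

```python
import cv2
import numpy as np

def information_sensitive_crops(image_bgr, grid=8, top_k=4, crop_scale=2):
    """Rank grid cells of a screenshot by edge density and return the
    top-k densest regions as enlarged crops (illustrative sketch only)."""
    # Edge map: Canny responds strongly to text, icons, and widget borders.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)

    h, w = edges.shape
    cell_h, cell_w = h // grid, w // grid

    # Score each grid cell by its fraction of edge pixels.
    scores = []
    for i in range(grid):
        for j in range(grid):
            cell = edges[i * cell_h:(i + 1) * cell_h,
                         j * cell_w:(j + 1) * cell_w]
            scores.append(((i, j), np.count_nonzero(cell) / cell.size))
    scores.sort(key=lambda x: x[1], reverse=True)

    # Crop the densest cells, enlarged by crop_scale to keep surrounding
    # context, and clipped to the image bounds.
    crops = []
    for (i, j), _ in scores[:top_k]:
        cy, cx = (i + 0.5) * cell_h, (j + 0.5) * cell_w
        half_h, half_w = crop_scale * cell_h / 2, crop_scale * cell_w / 2
        y0, y1 = max(0, int(cy - half_h)), min(h, int(cy + half_h))
        x0, x1 = max(0, int(cx - half_w)), min(w, int(cx + half_w))
        crops.append(image_bgr[y0:y1, x0:x1])
    return crops
```

In a pipeline like the one described, crops such as these could be passed to the MLLM at higher resolution than the sparse remainder of the screen, concentrating compute on visually dense regions.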
URL
https://arxiv.org/abs/2412.10342