Abstract
We present MM-Navigator, a GPT-4V-based agent for the smartphone graphical user interface (GUI) navigation task. MM-Navigator can interact with a smartphone screen as a human user would, determining the subsequent actions needed to fulfill given instructions. Our findings demonstrate that large multimodal models (LMMs), specifically GPT-4V, excel at zero-shot GUI navigation thanks to their advanced screen interpretation, action reasoning, and precise action localization capabilities. We first benchmark MM-Navigator on our collected iOS screen dataset. According to human assessments, the system achieves a 91% accuracy rate in generating reasonable action descriptions and a 75% accuracy rate in executing the correct actions for single-step instructions on iOS. Additionally, we evaluate the model on a subset of an Android screen navigation dataset, where it outperforms previous GUI navigators in a zero-shot fashion. Our benchmark and detailed analyses aim to lay robust groundwork for future research on the GUI navigation task. The project page is at this https URL.
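To make the described pipeline concrete, below is a minimal sketch of one navigation step: sending a screenshot plus an instruction to a vision-capable model and asking for the next UI action. This is not the authors' code; it assumes the OpenAI Python SDK (v1+), uses "gpt-4o" as a stand-in for GPT-4V, and the prompt wording and action format are illustrative only. The paper's localization step (mapping the described action to exact screen coordinates) is omitted here.

```python
# Hypothetical sketch of a single GUI-navigation step, not the paper's implementation.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def encode_screenshot(path: str) -> str:
    """Base64-encode a screenshot so it can be sent inline as an image_url payload."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def propose_action(screenshot_path: str, instruction: str) -> str:
    """Ask the model for the single next action toward `instruction`.

    Returns a free-form action description (e.g. "tap the Settings icon");
    grounding that description to screen coordinates is a separate step.
    """
    image_b64 = encode_screenshot(screenshot_path)
    response = client.chat.completions.create(
        model="gpt-4o",  # stand-in for GPT-4V; any vision-capable model works
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            f"Instruction: {instruction}\n"
                            "Given this phone screenshot, describe the single "
                            "next action (tap/scroll/type) to make progress."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )
    return response.choices[0].message.content


# Example usage (hypothetical file and instruction):
# print(propose_action("screen.png", "Turn on Do Not Disturb"))
```

Looping this step, with each new screenshot fed back in, yields the kind of multi-step, instruction-following agent behavior the abstract describes.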
URL
https://arxiv.org/abs/2311.07562