Abstract
We propose a method that allows two agents to develop a shared understanding for the purpose of performing a task that requires cooperation. Our method focuses on efficiently establishing successful task-oriented communication in an open multi-agent system, where the agents know nothing about each other and can only communicate via grounded interaction. By defining interaction restrictions and efficiency metrics, the method aims to assist researchers working on human-machine interaction or on scenarios that require a human in the loop. To that end, we point out the challenges and limitations of such a (diverse) setup, and define restrictions and requirements that aim to ensure that high task performance truthfully reflects the extent to which the agents correctly understand each other. Furthermore, we demonstrate a use case in which our method is applied to the task of cooperative query answering. We design the experiments by modifying an established ontology alignment benchmark. In this example, the agents represent different databases and want to query each other, with their knowledge defined in their own ontologies, which contain different and incomplete information. Grounded interaction here takes the form of examples consisting of common instances, for which the agents are expected to have similar knowledge. Our experiments demonstrate successful communication establishment under the required restrictions, and compare different agent policies that aim to solve the task efficiently.
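To illustrate the idea of grounded interaction via common instances, the following is a minimal sketch (not the paper's actual code; all names and the co-occurrence heuristic are our own assumptions): each agent labels instances under its own ontology, and a mapping between their vocabularies is inferred from how labels co-occur on the instances both agents know.

```python
# Hypothetical sketch: infer a label mapping between two agents'
# ontologies from instances known to both (the "grounding").
# The co-occurrence heuristic here is illustrative, not the paper's method.

def infer_mapping(agent_a_labels, agent_b_labels):
    """Map each of agent A's labels to agent B's most co-occurring label,
    using only the common instances as grounded examples."""
    common = set(agent_a_labels) & set(agent_b_labels)
    counts = {}  # (label_a, label_b) -> number of shared instances
    for inst in common:
        pair = (agent_a_labels[inst], agent_b_labels[inst])
        counts[pair] = counts.get(pair, 0) + 1
    mapping = {}
    # Greedily assign each of A's labels its strongest B counterpart.
    for (a, b), _ in sorted(counts.items(), key=lambda kv: -kv[1]):
        mapping.setdefault(a, b)
    return mapping

# Each dict: instance id -> label in that agent's own ontology.
# Agent B also knows an instance ("i4") that A does not; it is ignored.
a_labels = {"i1": "Film", "i2": "Film", "i3": "Director"}
b_labels = {"i1": "Movie", "i2": "Movie", "i3": "Person", "i4": "Person"}
print(infer_mapping(a_labels, b_labels))
# -> {'Film': 'Movie', 'Director': 'Person'}
```

In this toy version the agents' knowledge is different and incomplete (B knows "i4", A does not), yet the shared instances suffice to align the vocabularies, which is the kind of signal the abstract's grounded examples provide.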
URL
https://arxiv.org/abs/2305.09349