Abstract
Legal autonomy - the lawful activity of artificial intelligence agents - can be achieved in one of two ways. It can be achieved either by imposing constraints on AI actors such as developers, deployers and users, and on AI resources such as data, or by imposing constraints on the range and scope of the impact that AI agents can have on the environment. The latter approach involves encoding extant rules concerning AI-driven devices into the software of the AI agents controlling those devices (e.g., encoding rules about limitations on zones of operation into the agent software of an autonomous drone). This is a challenge, since the effectiveness of such an approach requires a method of extracting, loading, transforming and computing legal information that would be both explainable and legally interoperable, and that would enable AI agents to reason about the law. In this paper, we sketch a proof of principle for such a method using large language models (LLMs), expert legal systems known as legal decision paths, and Bayesian networks. We then show how the proposed method could be applied to extant regulation in matters of autonomous cars, such as the California Vehicle Code.
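To make the combination of decision paths and Bayesian networks concrete, the following is a minimal sketch, not taken from the paper: one hypothetical traffic rule ("do not exceed 25 mph in a school zone when children are present") is expressed as a short decision path whose branch condition is the marginal probability of a violation, computed from a tiny hand-rolled Bayesian network. All rule parameters, probabilities, and function names here are illustrative assumptions.

```python
# Hypothetical perception estimate for the unobserved node
# "children are present in the school zone".
P_CHILDREN = 0.8

# Conditional probability table (assumed values):
# P(violation | children_present, speed_exceeds_25)
CPT_VIOLATION = {
    (True, True): 0.99,   # zone active and speeding: near-certain violation
    (True, False): 0.01,  # zone active, within limit
    (False, True): 0.05,  # speeding, but zone likely inactive: residual risk
    (False, False): 0.0,
}

def p_violation(speed_mph: float) -> float:
    """Marginalise the violation probability over the latent
    'children present' node of the two-node network."""
    speeding = speed_mph > 25
    return (P_CHILDREN * CPT_VIOLATION[(True, speeding)]
            + (1 - P_CHILDREN) * CPT_VIOLATION[(False, speeding)])

def decision_path(speed_mph: float, threshold: float = 0.5) -> str:
    """One step of a legal decision path: the agent proceeds only if
    the inferred probability of a violation stays below a threshold."""
    return "proceed" if p_violation(speed_mph) < threshold else "slow down"
```

In a fuller system along the lines the abstract describes, an LLM would extract the rule text and its conditions from a source such as the California Vehicle Code, and the decision path would chain many such probabilistic checks rather than a single one.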
URL
https://arxiv.org/abs/2403.18537