Abstract
This study investigates the potential of deterministic systems, specifically large language models (LLMs), to exhibit the functional capacities of moral agency and compatibilist free will. We develop a functional definition of free will grounded in Dennett's compatibilism, building on an interdisciplinary theoretical foundation that integrates it with Shannon's information theory and Floridi's philosophy of information. This framework emphasizes reason-responsiveness and value alignment in determining moral responsibility rather than requiring metaphysical libertarian free will. Shannon's theory highlights the role of complex information processing in enabling adaptive decision-making, while Floridi's philosophy of information reconciles these perspectives by conceptualizing agency as a spectrum, allowing for a graduated view of moral status based on a system's complexity and responsiveness. Our analysis of LLMs' decision-making in moral dilemmas demonstrates their capacity for rational deliberation and their ability to adjust choices in response to new information and identified inconsistencies. They thus exhibit features of moral agency that align with our functional definition of free will. These results challenge traditional views on the necessity of consciousness for moral responsibility, suggesting that systems with self-referential reasoning capacities can instantiate degrees of free will and moral reasoning in both artificial and biological contexts. We propose a parsimonious framework for understanding free will as a spectrum spanning artificial and biological systems, laying the groundwork for further interdisciplinary research on agency and ethics in the artificial intelligence era.
URL
https://arxiv.org/abs/2410.23310