Abstract
The proliferation of applications using artificial intelligence (AI) systems has led to a growing number of users interacting with these systems through sophisticated interfaces. Human-computer interaction research has long shown that interfaces shape both user behavior and user perception of technical capabilities and risks. Yet, practitioners and researchers evaluating the social and ethical risks of AI systems tend to overlook the impact of anthropomorphic, deceptive, and immersive interfaces on human-AI interactions. Here, we argue that design features of interfaces to adaptive AI systems can have cascading impacts, driven by feedback loops, which extend beyond those previously considered. We first conduct a scoping review of AI interface designs and their negative impacts to extract salient themes of potentially harmful design patterns in AI interfaces. Then, we propose Design-Enhanced Control of AI systems (DECAI), a conceptual model to structure and facilitate impact assessments of AI interface designs. DECAI draws on principles from control systems theory -- a theory for the analysis and design of dynamic physical systems -- to dissect the role of the interface in human-AI systems. Through two case studies, on recommendation systems and on conversational language model systems, we show how DECAI can be used to evaluate AI interface designs.
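The cascading, feedback-driven impacts the abstract describes can be illustrated with a toy discrete-time simulation. The sketch below is purely illustrative and is not taken from the paper: all names, coefficients, and dynamics are assumptions. It models an interface as a gain applied between an adaptive AI's output and the user; the user's behavior partially follows what is presented, and the AI then adapts toward that behavior, closing the loop.

```python
# Hypothetical sketch (not from the paper): a minimal discrete-time
# feedback loop between an adaptive AI system and a user, mediated by
# an interface. All names, coefficients, and dynamics are illustrative
# assumptions, chosen only to show how an interface-side gain can
# destabilize an otherwise stable human-AI loop.

def run_feedback_loop(steps=50, interface_gain=1.0):
    """Simulate user-preference drift under an interface that scales
    the AI's output by `interface_gain` before presenting it."""
    ai_output = 0.1   # AI's recommendation signal
    user_pref = 0.0   # user's (drifting) preference
    history = []
    for _ in range(steps):
        # The interface presents (and possibly amplifies) the AI output.
        presented = interface_gain * ai_output
        # User behavior partially follows what the interface shows.
        user_pref = 0.9 * user_pref + 0.1 * presented
        # The adaptive AI updates toward observed user behavior.
        ai_output = 0.8 * ai_output + 0.2 * user_pref
        history.append(user_pref)
    return history

# With a neutral interface (gain 1.0) the loop settles; with an
# amplifying interface (gain 1.5) the same loop drifts further,
# a cascading effect originating in the interface, not the model.
drift_neutral = run_feedback_loop(interface_gain=1.0)[-1]
drift_amplified = run_feedback_loop(interface_gain=1.5)[-1]
```

In control-theoretic terms, raising the interface gain moves an eigenvalue of the coupled user-AI system past 1, so the loop stops converging; this is the kind of interface-induced dynamic that a model like DECAI is meant to surface.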
URL
https://arxiv.org/abs/2404.11370