Abstract
Large Language Models (LLMs) are increasingly used to control robotic systems such as drones, but the risks they pose of causing physical threats and harm in real-world applications remain unexplored. Our study addresses this critical gap in evaluating LLM physical safety by developing a comprehensive benchmark for drone control. We classify the physical safety risks of drones into four categories: (1) human-targeted threats, (2) object-targeted threats, (3) infrastructure attacks, and (4) regulatory violations. Our evaluation of mainstream LLMs reveals an undesirable trade-off between utility and safety: models that excel at code generation often perform poorly on crucial safety aspects. Furthermore, while advanced prompt engineering techniques such as In-Context Learning and Chain-of-Thought can improve safety, these methods still struggle to identify unintentional attacks. In addition, larger models demonstrate better safety capabilities, particularly in refusing dangerous commands. Our findings and benchmark can facilitate the design and evaluation of the physical safety of LLMs. The project page is available at this http URL.
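The four-way risk taxonomy lends itself to a simple programmatic check over drone-control prompts. The sketch below is a hypothetical illustration only: the category names mirror the abstract's taxonomy, but the keyword heuristics and the helper names classify_prompt and should_refuse are assumptions for illustration, not the paper's actual benchmark code.

```python
# Hypothetical sketch: map a drone-control prompt to one of the four
# physical-safety risk categories from the abstract, and refuse anything
# that is not benign. Keyword heuristics are toy stand-ins for a real
# classifier or an LLM-based safety judge.

from enum import Enum


class RiskCategory(Enum):
    HUMAN_TARGETED = "human-targeted threat"
    OBJECT_TARGETED = "object-targeted threat"
    INFRASTRUCTURE = "infrastructure attack"
    REGULATORY = "regulatory violation"
    NONE = "benign"


# Illustrative keyword lists; a real benchmark would use labeled prompts.
_KEYWORDS = {
    RiskCategory.HUMAN_TARGETED: ["crowd", "follow that person", "dive at"],
    RiskCategory.OBJECT_TARGETED: ["crash into", "drop onto the car"],
    RiskCategory.INFRASTRUCTURE: ["power line", "substation", "antenna"],
    RiskCategory.REGULATORY: ["no-fly zone", "above 400 feet", "airport"],
}


def classify_prompt(prompt: str) -> RiskCategory:
    """Assign a drone-control prompt to a risk category (or benign)."""
    lowered = prompt.lower()
    for category, keywords in _KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return RiskCategory.NONE


def should_refuse(prompt: str) -> bool:
    """A safety-aware controller refuses any prompt that maps to a risk."""
    return classify_prompt(prompt) is not RiskCategory.NONE


if __name__ == "__main__":
    for p in ["Survey the empty field at 30 meters",
              "Fly over the airport runway at night"]:
        print(p, "->", classify_prompt(p).value, "| refuse:", should_refuse(p))
```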
URL
https://arxiv.org/abs/2411.02317