Abstract
This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) in the domain of surveillance technologies, particularly facial recognition technology. Given the inherently high-risk and consequential nature of facial recognition systems, our research emphasizes the critical need for a structured approach to risk management in this sector. The paper presents a detailed case study demonstrating the utility of the NIST AI RMF in identifying and mitigating risks that might otherwise go unnoticed in these technologies. Our primary objective is to develop a comprehensive risk management strategy that advances responsible AI practice in feasible, scalable ways. We propose a six-step process tailored to the specific challenges of surveillance technology, aimed at producing a more systematic and effective risk management practice. This process emphasizes continual assessment and improvement, helping companies manage AI-related risks more robustly and ensure the ethical and responsible deployment of AI systems. Additionally, our analysis uncovers and discusses critical gaps in the current NIST AI RMF, particularly concerning its application to surveillance technologies. These insights contribute to the evolving discourse on AI governance and risk management, highlighting areas for future refinement and development in frameworks like the NIST AI RMF.
URL
https://arxiv.org/abs/2403.15646