Abstract
The trustworthiness of AI applications has been the subject of recent research and is also addressed in the EU's recently adopted AI Regulation. Foundation models currently emerging in the fields of text, speech, and image processing offer entirely new possibilities for developing AI applications. This whitepaper shows how the trustworthiness of an AI application developed with foundation models can be evaluated and ensured. To this end, the application-specific, risk-based approach for testing and ensuring the trustworthiness of AI applications, as developed by Fraunhofer IAIS in the 'AI Assessment Catalog - Guideline for Trustworthy Artificial Intelligence', is transferred to the context of foundation models. Special consideration is given to the fact that specific risks of foundation models can affect the AI application and must therefore also be taken into account when assessing trustworthiness. Chapter 1 of the whitepaper explains the fundamental relationship between foundation models and the AI applications based on them with respect to trustworthiness. Chapter 2 introduces the technical construction of foundation models, and Chapter 3 shows how AI applications can be developed on that basis. Chapter 4 gives an overview of the resulting risks to trustworthiness. Chapter 5 shows which requirements for AI applications and foundation models are to be expected under the draft of the European Union's AI Regulation, and Chapter 6 presents the system and procedure for meeting trustworthiness requirements.
URL
https://arxiv.org/abs/2405.04937