Abstract
As the manufacturing industry advances with sensor integration and automation, the opaque nature of deep learning models poses a significant challenge for fault detection and diagnosis. Despite the predictive insights Artificial Intelligence (AI) can deliver, advanced machine learning models often remain a black box. This paper reviews eXplainable AI (XAI) tools and techniques in this context. We explore various XAI methodologies, focusing on their role in making AI decision-making transparent, particularly in critical scenarios where humans are involved. We also discuss current limitations and potential future research directions that aim to balance explainability with model performance while improving trustworthiness in the context of AI applications for critical industrial use cases.
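To make the idea of post-hoc, model-agnostic explanation concrete, the following is a minimal illustrative sketch (not from the paper): a fault-detection classifier trained on hypothetical sensor features, explained with permutation feature importance from scikit-learn. The feature names and fault rule are assumptions for demonstration only.

```python
# Illustrative sketch: explaining a hypothetical fault-detection classifier
# with a model-agnostic XAI technique (permutation feature importance).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical sensor readings: temperature, vibration, pressure, current draw.
X = rng.normal(size=(n, 4))
# Assumed fault rule: faults occur when temperature and vibration are jointly high.
y = ((X[:, 0] + X[:, 1]) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Permutation importance measures how much held-out accuracy drops when each
# feature is shuffled, giving a global view of which sensors drive predictions.
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["temperature", "vibration", "pressure", "current"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

In this sketch the shuffled-feature accuracy drop surfaces temperature and vibration as the dominant drivers, which is the kind of transparency XAI methods aim to provide for operators in human-in-the-loop scenarios.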
URL
https://arxiv.org/abs/2404.11597