Paper Reading AI Learner

Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments

2024-04-23 22:31:42
Md Shajalal, Alexander Boden, Gunnar Stevens, Delong Du, Dean-Robin Kern

Abstract

Smart home systems are gaining popularity as homeowners strive to enhance their living and working environments while minimizing energy consumption. However, the adoption of artificial intelligence (AI)-enabled decision-making models in smart home systems faces challenges due to the complexity and black-box nature of these systems, leading to concerns about explainability, trust, transparency, accountability, and fairness. The emerging field of explainable artificial intelligence (XAI) addresses these issues by providing explanations for the models' decisions and actions. While state-of-the-art XAI methods are beneficial for AI developers and practitioners, they may not be easily understood by general users, particularly household members. This paper advocates for human-centered XAI methods, emphasizing the importance of delivering readily comprehensible explanations to enhance user satisfaction and drive the adoption of smart home systems. We review state-of-the-art XAI methods and prior studies focusing on human-centered explanations for general users in the context of smart home applications. Through experiments on two smart home application scenarios, we demonstrate that explanations generated by prominent XAI techniques might not be effective in helping users understand and make decisions. We thus argue for the necessity of a human-centric approach in representing explanations in smart home systems and highlight relevant human-computer interaction (HCI) methodologies, including user studies, prototyping, technology probes analysis, and heuristic evaluation, that can be employed to generate and present human-centered explanations to users.
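To make the abstract's claim concrete, the short sketch below shows the kind of raw feature-attribution output a prominent XAI technique produces for a smart home prediction task. SHAP is used here only as a common example; the excerpt does not name the paper's methods, and the appliance features, synthetic data, and model are illustrative assumptions rather than the paper's two application scenarios.

```python
# A minimal sketch (not from the paper) of a typical feature-attribution
# explanation for a hypothetical smart-home energy prediction model.
# All feature names, data, and the model choice are illustrative assumptions.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical household features for predicting hourly energy use (kWh).
X = pd.DataFrame({
    "outdoor_temp_c":   rng.normal(10, 8, 500),
    "occupants_home":   rng.integers(0, 5, 500),
    "heating_setpoint": rng.normal(21, 1.5, 500),
    "hour_of_day":      rng.integers(0, 24, 500),
})
# Synthetic target: demand grows with the setpoint-vs-outdoor-temperature gap.
y = (0.3 * (X["heating_setpoint"] - X["outdoor_temp_c"]).clip(lower=0)
     + 0.2 * X["occupants_home"] + rng.normal(0, 0.3, 500))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributions for a single prediction: one signed number per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

for name, value in zip(X.columns, shap_values[0]):
    print(f"{name:>18}: {value:+.3f} kWh")
```

The output is a list of signed per-feature contributions expressed in model units, which illustrates the kind of representation the authors argue may be informative for developers yet unhelpful for household members trying to understand or act on a decision.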

URL

https://arxiv.org/abs/2404.16074

PDF

https://arxiv.org/pdf/2404.16074.pdf

