
Evaluating Saliency Explanations in NLP by Crowdsourcing

2024-05-17 13:27:45
Xiaotian Lu, Jiyi Li, Zhen Wan, Xiaofeng Lin, Koh Takeuchi, Hisashi Kashima

Abstract

Deep learning models have performed well on many NLP tasks. However, their internal mechanisms are typically difficult for humans to understand. Developing methods to explain models has therefore become a key issue for the reliability of deep learning models in many important applications. Various saliency explanation methods, which assign each input feature a score proportional to its contribution to the output, have been proposed to identify the parts of the input that a model values most. Despite a considerable body of work on the evaluation of saliency methods, whether the results of various evaluation metrics agree with human cognition remains an open question. In this study, we propose a new human-based method to evaluate saliency methods in NLP by crowdsourcing. We recruited 800 crowd workers and empirically evaluated seven saliency methods on two datasets with the proposed method. We analyzed the performance of the saliency methods, compared our results with existing automated evaluation methods, and identified notable differences between NLP and computer vision (CV) in the use of saliency methods. The instance-level data from our crowdsourced experiments and the code to reproduce the explanations are available at this https URL.
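To make the abstract's description concrete, below is a minimal sketch (not the authors' code) of one common saliency method, gradient × input, applied to a toy PyTorch text classifier. The vocabulary, model, and sentence are hypothetical illustrations; the paper's seven evaluated methods and its crowdsourcing protocol are not reproduced here.

```python
# A minimal gradient-x-input saliency sketch; all names below are
# hypothetical illustrations, not the paper's actual models or data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy vocabulary and a 4-token input sentence.
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "terrible": 4}
tokens = ["the", "movie", "was", "great"]
ids = torch.tensor([[vocab[t] for t in tokens]])   # shape (1, seq_len)

# Toy classifier: mean-pooled embeddings followed by a linear layer.
embedding = nn.Embedding(len(vocab), 8)
classifier = nn.Linear(8, 2)

embeds = embedding(ids)                 # (1, seq_len, 8)
embeds.retain_grad()                    # keep gradients on this non-leaf tensor
logits = classifier(embeds.mean(dim=1))              # (1, 2)
logits[0, logits.argmax()].backward()  # grad of the predicted-class logit

# Gradient x input: summing over the embedding dimension yields one
# contribution score per input token, as the abstract describes.
scores = (embeds.grad * embeds).sum(dim=-1).squeeze(0)
for tok, score in zip(tokens, scores.tolist()):
    print(f"{tok:>8s}: {score:+.4f}")
```

Higher-magnitude scores mark the tokens the model leans on most; the paper's question is whether rankings produced by such methods agree with what crowd workers perceive as important.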


URL

https://arxiv.org/abs/2405.10767

PDF

https://arxiv.org/pdf/2405.10767.pdf

