Abstract
Modern Machine Learning (ML) has significantly advanced various research fields, but the opaque nature of ML models hinders their adoption in several domains. Explainable AI (XAI) addresses this challenge by providing additional information to help users understand the internal decision-making process of ML models. In the field of neuroscience, enriching an ML model for brain decoding with attribution-based XAI techniques means being able to highlight which brain areas correlate with the task at hand, thus offering valuable insights to domain experts. In this paper, we analyze human and Computer Vision (CV) systems in parallel, training and explaining two ML models based respectively on functional Magnetic Resonance Imaging (fMRI) data and movie frames. We do so by leveraging the "StudyForrest" dataset, which includes fMRI scans of subjects watching the "Forrest Gump" movie, emotion annotations, and eye-tracking data. For human vision, the ML task is to link fMRI data with emotional annotations, and the explanations highlight the brain regions strongly correlated with the label. For computer vision, on the other hand, the input data is movie frames, and the explanations are pixel-level heatmaps. We cross-analyze our results, linking human attention (obtained through eye-tracking) with XAI saliency on CV models and brain region activations. We show how a parallel analysis of human and computer vision can provide useful information for both the neuroscience community (allocation theory) and the ML community (biological plausibility of convolutional models).
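To illustrate the kind of pixel-level attribution heatmap the abstract refers to, the sketch below implements occlusion sensitivity, one simple attribution-based XAI technique: each image patch is masked in turn, and the drop in the model's score measures how much that patch contributed. This is a generic, minimal example with a toy scoring function, not the authors' actual pipeline or models.

```python
import numpy as np

def occlusion_heatmap(image, score_fn, patch=4):
    """Occlusion sensitivity: attribution of each patch is the score
    drop observed when that patch is zeroed out."""
    h, w = image.shape
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat

# Toy stand-in for a trained CV model: a fixed linear scorer that
# responds only to brightness in the centre of the frame.
rng = np.random.default_rng(0)
frame = rng.random((16, 16))
frame[6:10, 6:10] += 2.0          # bright central region
weights = np.zeros((16, 16))
weights[6:10, 6:10] = 1.0
score = lambda img: float((img * weights).sum())

heat = occlusion_heatmap(frame, score)
# Only the central patches receive non-zero attribution.
```

In practice, gradient-based methods (e.g. saliency maps or Grad-CAM applied to a convolutional network) serve the same role more efficiently, but the occlusion formulation makes the link between "heatmap value" and "importance to the prediction" explicit.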
URL
https://arxiv.org/abs/2408.00493