Abstract
Neural encoding of artificial neural networks (ANNs) links their computational representations to brain responses, offering insights into how the brain processes information. Current studies mostly use linear encoding models for the sake of interpretability, even though brain responses are often nonlinear. This has sparked interest in developing nonlinear encoding models that remain interpretable. To address this problem, we propose LinBridge, a learnable and flexible framework based on Jacobian analysis for interpreting nonlinear encoding models. LinBridge posits that the nonlinear mapping between ANN representations and neural responses can be factorized into a linear inherent component that approximates the complex nonlinear relationship, and a mapping bias that captures sample-selective nonlinearity. The Jacobian matrix, which reflects the rate of change of the outputs with respect to the inputs, enables the analysis of sample-selective mapping in nonlinear models. LinBridge employs a self-supervised learning strategy to extract both the linear inherent component and the nonlinear mapping biases from the Jacobian matrices of the test set, allowing it to adapt effectively to various nonlinear encoding models. We validate the LinBridge framework in the scenario of neural visual encoding, using computational visual representations from CLIP-ViT to predict brain activity recorded via functional magnetic resonance imaging (fMRI). Our experimental results demonstrate that: 1) the linear inherent component extracted by LinBridge accurately reflects the complex mappings of nonlinear neural encoding models; 2) the sample-selective mapping bias elucidates the variability of nonlinearity across different levels of the visual processing hierarchy. This study presents a novel tool for interpreting nonlinear neural encoding models and offers fresh evidence about the hierarchical distribution of nonlinearity in the visual cortex.
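As a reading aid, the sketch below illustrates the decomposition idea described in the abstract: per-sample Jacobians of a nonlinear encoding model are split into a shared linear component and a sample-selective residual. It is not the paper's self-supervised procedure; the MLP encoder, feature and voxel dimensions, and the use of the mean Jacobian as the "linear inherent component" are assumptions made purely for illustration.

    # Minimal sketch (assumed setup, not the authors' implementation):
    # decompose per-sample Jacobians of a nonlinear encoding model into a
    # shared linear part plus a sample-selective mapping bias.
    import torch
    from torch.autograd.functional import jacobian

    feat_dim, n_voxels = 512, 64   # assumed sizes of ANN features / fMRI voxels

    # Stand-in nonlinear encoding model: ANN features -> voxel responses.
    encoder = torch.nn.Sequential(
        torch.nn.Linear(feat_dim, 256),
        torch.nn.GELU(),
        torch.nn.Linear(256, n_voxels),
    )

    # Held-out test-set representations (e.g., CLIP-ViT features), random here.
    features = torch.randn(100, feat_dim)

    # Per-sample Jacobians: rate of change of predicted responses w.r.t. inputs.
    jacobians = torch.stack([
        jacobian(encoder, x, vectorize=True) for x in features
    ])                                  # shape: (n_samples, n_voxels, feat_dim)

    # Linear inherent component: here, simply the Jacobian averaged over samples.
    W_linear = jacobians.mean(dim=0)    # (n_voxels, feat_dim)

    # Sample-selective mapping bias: deviation of each Jacobian from the shared part;
    # its norm gives one nonlinearity score per stimulus.
    bias = jacobians - W_linear
    nonlinearity_score = bias.flatten(1).norm(dim=1)

    print(W_linear.shape, nonlinearity_score.shape)

In this toy setup, a large per-sample bias norm would indicate a stimulus for which the nonlinear model departs strongly from the shared linear mapping, mirroring the abstract's notion of sample-selective nonlinearity.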
URL
https://arxiv.org/abs/2410.20053