Abstract
We interpret the function of individual neurons in CLIP by automatically describing them using text. Analyzing the direct effects (i.e. the flow from a neuron through the residual stream to the output) or the indirect effects (overall contribution) fails to capture the neurons' function in CLIP. Therefore, we present the "second-order lens", analyzing the effect flowing from a neuron through the later attention heads, directly to the output. We find that these effects are highly selective: for each neuron, the effect is significant for <2% of the images. Moreover, each effect can be approximated by a single direction in the text-image space of CLIP. We describe neurons by decomposing these directions into sparse sets of text representations. The sets reveal polysemantic behavior - each neuron corresponds to multiple, often unrelated, concepts (e.g. ships and cars). Exploiting this neuron polysemy, we mass-produce "semantic" adversarial examples by generating images with concepts spuriously correlated to the incorrect class. Additionally, we use the second-order effects for zero-shot segmentation and attribute discovery in images. Our results indicate that a scalable understanding of neurons can be used for model deception and for introducing new model capabilities.
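The sparse decomposition step described above can be pictured with a short greedy matching-pursuit sketch: given a neuron's second-order direction in CLIP's joint text-image space and a dictionary of CLIP text embeddings for a candidate vocabulary, repeatedly pick the text concept most correlated with the current residual. This is only an illustration under assumed, precomputed inputs; the variable names are hypothetical and the paper's actual decomposition procedure may differ.

```python
import numpy as np

def sparse_decompose(direction, text_embeddings, k=5):
    """Greedily approximate `direction` as a sparse combination of text embeddings.

    direction       : (d,) unit-norm vector, e.g. a neuron's second-order direction
                      in CLIP's text-image space (assumed precomputed).
    text_embeddings : (n, d) matrix of unit-norm CLIP text embeddings for a
                      candidate vocabulary (assumed precomputed).
    k               : number of text concepts to keep (sparsity level).

    Returns indices of the selected concepts and their coefficients.
    A plain matching-pursuit sketch, not the paper's exact algorithm.
    """
    residual = direction.copy()
    chosen, coeffs = [], []
    for _ in range(k):
        scores = text_embeddings @ residual           # correlation with the residual
        idx = int(np.argmax(np.abs(scores)))          # best-matching text concept
        coef = float(scores[idx])
        chosen.append(idx)
        coeffs.append(coef)
        residual = residual - coef * text_embeddings[idx]  # remove the explained part
    return chosen, coeffs

# Hypothetical usage: `neuron_direction` and `vocab_embeddings` would come from the
# second-order analysis and a CLIP text encoder, respectively.
# idxs, ws = sparse_decompose(neuron_direction, vocab_embeddings, k=5)
# print([vocab[i] for i in idxs], ws)
```

With unrelated concepts in the vocabulary, a polysemantic neuron would surface several dissimilar terms (e.g. "ship" and "car") among its top components.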
URL
https://arxiv.org/abs/2406.04341