Abstract
How does the internal computation of a machine learning model transform inputs into predictions? In this paper, we introduce a task called component modeling that aims to address this question. The goal of component modeling is to decompose an ML model's prediction in terms of its components -- simple functions (e.g., convolution filters, attention heads) that are the "building blocks" of model computation. We focus on a special case of this task, component attribution, where the goal is to estimate the counterfactual impact of individual components on a given prediction. We then present COAR, a scalable algorithm for estimating component attributions; we demonstrate its effectiveness across models, datasets, and modalities. Finally, we show that component attributions estimated with COAR directly enable model editing across five tasks, namely: fixing model errors, "forgetting" specific classes, boosting subpopulation robustness, localizing backdoor attacks, and improving robustness to typographic attacks. We provide code for COAR at this https URL .
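The core idea of component attribution, as the abstract describes it, is to estimate the counterfactual effect of each component on a prediction. A minimal sketch of one way to do this is a linear surrogate fit over random component ablations; everything below (the toy model, sampling scheme, and regression setup) is an illustrative assumption, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_components = 50  # toy stand-in for, e.g., conv filters or attention heads

# Hypothetical toy "model": its prediction on a fixed input is a smooth
# function of which components are kept (1) vs. ablated (0).
true_effects = rng.normal(size=n_components) * 0.05
def model_output(mask):
    return np.tanh(mask @ true_effects)

# Linear-surrogate attribution sketch:
# 1) ablate random subsets of components,
# 2) record the model's output on the fixed input,
# 3) regress outputs on the binary keep/ablate masks.
n_samples = 2000
masks = (rng.random((n_samples, n_components)) > 0.1).astype(float)  # keep ~90%
outputs = np.array([model_output(m) for m in masks])

# Least-squares fit with a bias term; attributions[i] estimates the
# counterfactual effect of component i on this prediction.
X = np.hstack([masks, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(X, outputs, rcond=None)
attributions = coef[:-1]
```

In this toy setup the recovered attributions track the ground-truth component effects closely, which is the property that makes such surrogates useful for the editing applications listed above.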
URL
https://arxiv.org/abs/2404.11534