Abstract
Owing to privacy and security concerns, the need to erase unwanted information from pre-trained vision models is becoming increasingly evident. In real-world scenarios, erasure requests may arrive at any time from both users and model owners, and these requests usually form a sequence. Under such a setting, selected information is expected to be continuously removed from a pre-trained model while the rest is preserved. We define this problem as continual forgetting and identify two key challenges. (i) For unwanted knowledge, efficient and effective deletion is crucial. (ii) For remaining knowledge, the impact of the forgetting procedure should be minimal. To address them, we propose Group Sparse LoRA (GS-LoRA). Specifically, for (i), we use LoRA modules to fine-tune the FFN layers in Transformer blocks independently for each forgetting task, and for (ii), a simple group sparse regularization is adopted, enabling automatic selection of specific LoRA groups and zeroing out the others. GS-LoRA is effective, parameter-efficient, data-efficient, and easy to implement. We conduct extensive experiments on face recognition, object detection, and image classification, and demonstrate that GS-LoRA manages to forget specific classes with minimal impact on other classes. Code will be released at \url{this https URL}.
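The two ingredients described above — LoRA updates on the FFN weights and a group sparse penalty that selects which LoRA groups remain nonzero — can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' released implementation: the function names, the tensor shapes, and the choice of one group per LoRA module's `(A, B)` pair are illustrative.

```python
import numpy as np

def lora_ffn_forward(x, W, A, B):
    """Forward pass through one FFN projection with a LoRA update.

    W : frozen pre-trained weight, shape (out, in)
    A : low-rank down-projection,  shape (r, in)
    B : low-rank up-projection,    shape (out, r)

    Only A and B are trained for a forgetting task; W stays fixed.
    """
    return x @ (W + B @ A).T

def group_sparse_penalty(lora_groups, alpha=0.01):
    """Group-lasso regularizer: alpha * sum_k ||(A_k, B_k)||_F,
    treating each LoRA module's parameters as one group.

    Because the per-group norm is NOT squared, its gradient does not
    vanish near zero, so the optimizer can drive entire groups exactly
    to zero — i.e. automatically deselect those LoRA modules.
    """
    return alpha * sum(
        np.sqrt((A ** 2).sum() + (B ** 2).sum()) for A, B in lora_groups
    )
```

The unsquared Frobenius norm is what distinguishes group sparsity from ordinary weight decay: decay shrinks all groups uniformly, whereas the group-lasso term zeroes out whole groups, leaving only the LoRA modules that are needed for the current forgetting task.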
URL
https://arxiv.org/abs/2403.11530