
IMEX-Reg: Implicit-Explicit Regularization in the Function Space for Continual Learning

2024-04-28 12:25:09
Prashant Bhat, Bharath Renjith, Elahe Arani, Bahram Zonooz

Abstract

Continual learning (CL) remains one of the long-standing challenges for deep neural networks due to catastrophic forgetting of previously acquired knowledge. Although rehearsal-based approaches have been fairly successful in mitigating catastrophic forgetting, they suffer from overfitting on buffered samples and prior information loss, hindering generalization under low-buffer regimes. Inspired by how humans learn using strong inductive biases, we propose IMEX-Reg to improve the generalization performance of experience rehearsal in CL under low buffer regimes. Specifically, we employ a two-pronged implicit-explicit regularization approach using contrastive representation learning (CRL) and consistency regularization. To further leverage the global relationship between representations learned using CRL, we propose a regularization strategy to guide the classifier toward the activation correlations in the unit hypersphere of the CRL. Our results show that IMEX-Reg significantly improves generalization performance and outperforms rehearsal-based approaches in several CL scenarios. It is also robust to natural and adversarial corruptions with less task-recency bias. Additionally, we provide theoretical insights to support our design decisions further.
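
The sketch below illustrates one way the two-pronged objective described above could be assembled for experience rehearsal: a supervised contrastive term on L2-normalized projections (implicit regularization in the function space) plus a consistency term that anchors predictions on buffered samples to their stored logits (explicit regularization). This is a minimal illustration, not the authors' implementation; the `backbone`/`classifier` attributes, the projection head, the buffer interface, and the loss weights `alpha` and `beta` are all assumptions.

```python
# Minimal sketch of a two-pronged implicit-explicit rehearsal objective.
# NOT the IMEX-Reg reference code: model layout, buffer interface, and
# loss weights are illustrative assumptions.
import torch
import torch.nn.functional as F


def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss on L2-normalized projections (unit hypersphere)."""
    z = F.normalize(features, dim=1)                       # map projections onto the unit hypersphere
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    sim = sim - sim.max(dim=1, keepdim=True)[0].detach()   # numerical stability
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)   # exclude self-similarity from the denominator
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # average log-likelihood of same-class pairs, per anchor
    return -(log_prob * pos_mask.float()).sum(dim=1).div(pos_count).mean()


def imex_style_loss(model, proj_head, x_cur, y_cur, x_buf, y_buf, buf_logits,
                    alpha=0.5, beta=0.5):
    """Task loss + implicit (CRL) + explicit (consistency) regularization."""
    x = torch.cat([x_cur, x_buf])
    y = torch.cat([y_cur, y_buf])
    feats = model.backbone(x)                              # shared representations
    logits = model.classifier(feats)

    ce = F.cross_entropy(logits, y)                        # rehearsal: current + buffered samples
    crl = supcon_loss(proj_head(feats), y)                 # implicit regularization via CRL
    # explicit regularization: keep buffered predictions consistent with stored logits
    consistency = F.mse_loss(logits[x_cur.size(0):], buf_logits)
    return ce + alpha * crl + beta * consistency
```

Normalizing the projections places them on the unit hypersphere, which is where the paper's additional regularizer guides the classifier toward the activation correlations learned by the CRL head; that coupling is omitted from the sketch for brevity.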

Abstract (translated)

Continual learning (CL) remains a long-standing challenge for deep neural networks because previously acquired knowledge is lost through catastrophic forgetting. Although rehearsal-based approaches have been fairly successful at mitigating catastrophic forgetting, they overfit to buffered samples and lose prior information, which hinders generalization under low-buffer regimes. Inspired by how humans learn with strong inductive biases, we propose IMEX-Reg to improve the generalization performance of experience rehearsal in CL under low-buffer regimes. Specifically, we adopt a two-pronged implicit-explicit regularization approach that combines contrastive representation learning (CRL) with consistency regularization. To further exploit the global relationships between representations learned with CRL, we propose a regularization strategy that guides the classifier toward the activation correlations on the unit hypersphere of the CRL. Our results show that IMEX-Reg significantly improves generalization performance and outperforms rehearsal-based approaches in several CL scenarios. It is also robust to natural and adversarial corruptions and exhibits less task-recency bias. In addition, we provide theoretical insights that further support our design decisions.

URL

https://arxiv.org/abs/2404.18161

PDF

https://arxiv.org/pdf/2404.18161.pdf

