Paper Reading AI Learner

Learning Optimal Features via Partial Invariance

2023-01-28 02:48:14
Moulik Choraria, Ibtihal Ferwana, Ankur Mani, Lav R. Varshney

Abstract

Learning models that are robust to test-time distribution shifts is a key concern in domain generalization and, more broadly, for real-world applicability. Invariant Risk Minimization (IRM) is one particular framework that aims to learn deep invariant features from multiple domains and has subsequently led to further variants. A key assumption for the success of these methods is that the underlying causal mechanisms/features remain invariant across domains and that the true invariant features are sufficient to learn the optimal predictor. In practical problem settings, these assumptions are often not satisfied, which leads to IRM learning a sub-optimal predictor for the task. In this work, we propose the notion of partial invariance as a relaxation of the IRM framework. Under our problem setting, we first highlight the sub-optimality of the IRM solution. We then demonstrate how partitioning the training domains, assuming access to some meta-information about the domains, can help improve the performance of invariant models via partial invariance. Finally, we conduct several experiments, in linear settings as well as on language and image classification tasks with deep models, that verify our conclusions.
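To make the setup concrete, the following is a minimal sketch (not the paper's implementation) of the idea in NumPy: an IRMv1-style objective that penalizes the per-environment gradient of the risk with respect to a scalar dummy classifier, plus a partial-invariance variant that partitions environments using meta-information and enforces invariance only within each partition. The function names and the `groups` mapping are illustrative assumptions, not from the paper.

```python
import numpy as np
from collections import defaultdict

def env_risk_grad(phi, y):
    # Gradient of the squared-error risk w.r.t. a scalar dummy
    # classifier w, evaluated at w = 1.0 (the IRMv1 device):
    # risk(w) = mean((w * phi - y)^2)  =>  d/dw at w=1 is below.
    return 2.0 * np.mean((phi - y) * phi)

def irm_objective(envs, lam=1.0):
    # IRMv1-style objective: average empirical risk plus the squared
    # per-environment gradient penalty, weighted by lam.
    risks = [np.mean((phi - y) ** 2) for phi, y in envs]
    penalty = sum(env_risk_grad(phi, y) ** 2 for phi, y in envs)
    return float(np.mean(risks) + lam * penalty)

def partial_invariance_objective(envs, groups, lam=1.0):
    # Partial invariance (sketch): partition environments using
    # meta-information (`groups` maps env index -> partition id) and
    # apply the invariance penalty only within each partition.
    parts = defaultdict(list)
    for idx, env in enumerate(envs):
        parts[groups[idx]].append(env)
    return sum(irm_objective(p, lam) for p in parts.values())
```

With a single partition this reduces to the ordinary IRM objective over all environments; splitting environments into groups relaxes the global invariance constraint, which is the sense in which the partitioning can improve the learned predictor.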


URL

https://arxiv.org/abs/2301.12067

PDF

https://arxiv.org/pdf/2301.12067.pdf
