A separability-based approach to quantifying generalization: which layer is best?

2024-05-02 17:54:35
Luciano Dyballa, Evan Gerritz, Steven W. Zucker

Abstract

Generalization to unseen data remains poorly understood for deep learning classification and foundation models. How can one assess the ability of networks to adapt to new or extended versions of their input space in the spirit of few-shot learning, out-of-distribution generalization, and domain adaptation? Which layers of a network are likely to generalize best? We provide a new method for evaluating the capacity of networks to represent a sampled domain, regardless of whether the network has been trained on all classes in the domain. Our approach is the following: after fine-tuning state-of-the-art pre-trained models for visual classification on a particular domain, we assess their performance on data from related but distinct variations in that domain. Generalization power is quantified as a function of the latent embeddings of unseen data from intermediate layers for both unsupervised and supervised settings. Working throughout all stages of the network, we find that (i) high classification accuracy does not imply high generalizability; and (ii) deeper layers in a model do not always generalize the best, which has implications for pruning. Since the trends observed across datasets are largely consistent, we conclude that our approach reveals (a function of) the intrinsic capacity of the different layers of a model to generalize.
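
The abstract describes the approach only at a high level, so the following is a minimal Python sketch of the general idea rather than the paper's actual protocol: embed unseen-class images at several intermediate layers of a pre-trained backbone and score how separable the classes are at each layer. The backbone (ResNet-50), the choice of layers, and the cross-validated k-NN accuracy used as a separability proxy are all illustrative assumptions; the paper's own separability measure and its supervised/unsupervised variants are not reproduced here.

    import numpy as np
    import torch
    from torchvision.models import resnet50, ResNet50_Weights
    from torchvision.models.feature_extraction import create_feature_extractor
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder "unseen" data: in practice, images and labels from classes
    # that were held out of fine-tuning.
    unseen_images = torch.randn(200, 3, 224, 224)
    unseen_labels = np.random.randint(0, 5, size=200)

    # Any pre-trained (or fine-tuned) backbone works; ResNet-50 is just an example.
    model = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2).eval()
    nodes = ["layer1", "layer2", "layer3", "layer4", "avgpool"]
    extractor = create_feature_extractor(model, return_nodes={n: n for n in nodes})

    with torch.no_grad():
        features = extractor(unseen_images)

    for name in nodes:
        f = features[name]
        # Collapse spatial dimensions so each image yields one embedding vector.
        emb = f.mean(dim=(2, 3)).numpy() if f.dim() == 4 else f.flatten(1).numpy()
        # Separability proxy (an assumption, not the paper's metric):
        # cross-validated k-NN accuracy on the unseen classes.
        score = cross_val_score(KNeighborsClassifier(n_neighbors=5),
                                emb, unseen_labels, cv=5).mean()
        print(f"{name}: separability proxy = {score:.3f}")

Under this sketch, a layer whose embeddings yield higher held-out k-NN accuracy on the unseen classes would be read as generalizing better, which is the kind of per-layer comparison the abstract describes.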

Abstract (translated)

Generalization to unseen data remains poorly understood for deep learning classification and foundation models. How can one assess a network's ability to adapt to new or extended versions of its input space, in the spirit of few-shot learning, out-of-distribution generalization, and domain adaptation? Which layers of a network are likely to generalize best? We propose a method for evaluating a network's capacity to represent a given domain, regardless of whether the network has been trained on all classes in that domain. Our approach is as follows: after fine-tuning state-of-the-art pre-trained models on a particular domain, we evaluate their performance on data from related but distinct variations of that domain. Generalization power is quantified as a function of the latent embeddings of unseen data at intermediate layers, in both unsupervised and supervised settings. Working across all stages of the network, we find that (i) high classification accuracy does not necessarily imply high generalizability; and (ii) the deeper layers of a model do not always generalize best, which has implications for pruning. Since the trends observed across datasets are largely consistent, we conclude that our approach reveals the intrinsic capacity of a model's different layers to generalize.

URL

https://arxiv.org/abs/2405.01524

PDF

https://arxiv.org/pdf/2405.01524.pdf

