Paper Reading AI Learner

Multi-Scale Representations by Varying Window Attention for Semantic Segmentation

2024-04-25 12:35:27
Haotian Yan, Ming Wu, Chuang Zhang

Abstract

Multi-scale learning is central to semantic segmentation. We visualize the effective receptive field (ERF) of canonical multi-scale representations and point out two risks in learning them: scale inadequacy and field inactivation. To address these issues, we present a novel multi-scale learner, varying window attention (VWA). VWA builds on local window attention (LWA), disentangling it into a query window and a context window and letting the scale of the context vary so that the query can learn representations at multiple scales. However, enlarging the context window by a ratio R significantly increases the memory footprint and computation cost (R^2 times that of LWA). We propose a simple yet effective re-scaling strategy that eliminates this extra cost without compromising performance, so VWA overcomes the receptive-field limitation of the local window at the same cost as LWA. Furthermore, building on VWA together with several MLPs, we introduce a multi-scale decoder (MSD), VWFormer, to improve multi-scale representations for semantic segmentation. VWFormer is as efficient as the most compute-friendly MSDs, such as FPN and the MLP decoder, yet performs considerably better than any existing MSD. For instance, using nearly half of UPerNet's computation, VWFormer outperforms it by 1.0%-2.5% mIoU on ADE20K. With little extra overhead (about 10G FLOPs), Mask2Former equipped with VWFormer improves by 1.0%-1.3% mIoU.
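
The R^2 claim follows from the attention cost inside each window: with window size w, plain LWA relates w^2 queries to w^2 keys, while a context enlarged by R yields w^2 x (Rw)^2 = R^2 x w^4 interactions per window. The PyTorch sketch below illustrates this reading of VWA; it is not the authors' implementation. Each non-overlapping query window attends to a context window R times larger around it, and the enlarged context is pooled back to the base window size before attention, which is one plausible interpretation of the re-scaling strategy. The class name, the use of average pooling, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VaryingWindowAttention(nn.Module):
    """Sketch: query windows of size w attend to context windows of size R*w,
    with the context pooled back to w x w so the per-window attention cost
    stays the same as plain local window attention."""

    def __init__(self, dim, window=8, ratio=2, heads=4):
        super().__init__()
        assert dim % heads == 0
        self.window, self.ratio, self.heads = window, ratio, heads
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, C, H, W), H and W divisible by window
        B, C, H, W = x.shape
        w, R = self.window, self.ratio
        nH, nW = H // w, W // w
        # Query windows: plain non-overlapping partition, as in LWA.
        q = x.unfold(2, w, w).unfold(3, w, w)   # (B, C, nH, nW, w, w)
        q = q.permute(0, 2, 3, 4, 5, 1).reshape(B * nH * nW, w * w, C)
        # Context windows: same centers, but R times larger (zero padding at borders).
        pad = R * w - w
        ctx = F.pad(x, [pad // 2, pad - pad // 2] * 2)
        ctx = F.unfold(ctx, kernel_size=R * w, stride=w)     # (B, C*(R*w)^2, nH*nW)
        ctx = ctx.transpose(1, 2).reshape(B * nH * nW, C, R * w, R * w)
        # Re-scaling (assumed here to be average pooling): bring the enlarged
        # context back to w x w keys/values, removing the R^2 extra cost.
        ctx = F.adaptive_avg_pool2d(ctx, w).flatten(2).transpose(1, 2)   # (B*nH*nW, w*w, C)
        # Standard multi-head attention within each (query window, context window) pair.
        heads, d = self.heads, C // self.heads
        split = lambda t: t.reshape(t.shape[0], -1, heads, d).transpose(1, 2)
        k, v = self.kv(ctx).chunk(2, dim=-1)
        out = F.scaled_dot_product_attention(split(self.q(q)), split(k), split(v))
        out = self.proj(out.transpose(1, 2).reshape(B * nH * nW, w * w, C))
        # Undo the window partition back to (B, C, H, W).
        out = out.reshape(B, nH, nW, w, w, C).permute(0, 5, 1, 3, 2, 4)
        return out.reshape(B, C, H, W)


# e.g. VaryingWindowAttention(dim=64, window=8, ratio=2)(torch.randn(1, 64, 64, 64))
```

Following the abstract, a decoder in this spirit would run several such branches with different ratios R in parallel and fuse their outputs with MLPs to form the multi-scale decoder; those details are left to the paper.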

URL

https://arxiv.org/abs/2404.16573

PDF

https://arxiv.org/pdf/2404.16573.pdf

