Paper Reading AI Learner

LR-FPN: Enhancing Remote Sensing Object Detection with Location Refined Feature Pyramid Network

2024-04-02 03:36:07
Hanqian Li, Ruinan Zhang, Ye Pan, Junchi Ren, Fei Shen

Abstract

Remote sensing object detection aims to identify and locate critical targets within remote sensing images, with extensive applications in agriculture and urban planning. Feature pyramid networks (FPNs) are commonly used to extract multi-scale features. However, existing FPNs often overlook the extraction of low-level positional information and fine-grained context interaction. To address this, we propose a novel location refined feature pyramid network (LR-FPN) that enhances the extraction of shallow positional information and facilitates fine-grained context interaction. The LR-FPN consists of two primary modules: the shallow position information extraction module (SPIEM) and the contextual interaction module (CIM). Specifically, SPIEM first maximizes the retention of robust location information about the target by simultaneously extracting positional and saliency information from the low-level feature map. Subsequently, CIM injects this robust location information into different layers of the original FPN through spatial and channel interaction, explicitly enhancing the object area. Moreover, in the spatial interaction, we introduce a simple local and non-local interaction strategy to learn and retain the saliency information of the object. Lastly, the LR-FPN can be readily integrated into common object detection frameworks to significantly improve performance. Extensive experiments on two large-scale remote sensing datasets (i.e., DOTAV1.0 and HRSC2016) demonstrate that the proposed LR-FPN is superior to state-of-the-art object detection approaches. Our code and models will be publicly available.
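To make the abstract's pipeline concrete, here is a minimal NumPy sketch of the two modules as described above: SPIEM pools positional and saliency cues from a low-level feature map, and CIM injects those cues into a pyramid level via a spatial gate. The specific operations (axis-wise average pooling for position, channel-wise max pooling for saliency, a sigmoid gate for injection) are assumptions chosen for illustration, not the paper's actual definitions.

```python
import numpy as np

def spiem(low_feat):
    """Sketch of SPIEM: pool positional and saliency cues from a
    low-level (C, H, W) feature map. The pooling choices here are
    illustrative assumptions, not the paper's exact operators."""
    pos_h = low_feat.mean(axis=2, keepdims=True)    # (C, H, 1): row-wise positional cue
    pos_w = low_feat.mean(axis=1, keepdims=True)    # (C, 1, W): column-wise positional cue
    saliency = low_feat.max(axis=0, keepdims=True)  # (1, H, W): per-location saliency cue
    return pos_h, pos_w, saliency

def cim(pyr_feat, pos_h, pos_w, saliency):
    """Sketch of CIM: inject location/saliency cues into one pyramid
    level (C, H, W) by spatially re-weighting it with a sigmoid gate
    (again an assumption standing in for the paper's interactions)."""
    c, h, w = pyr_feat.shape

    def resize(x, th, tw):
        # Nearest-neighbour resampling to the pyramid level's resolution.
        ih = np.arange(th) * x.shape[1] // th
        iw = np.arange(tw) * x.shape[2] // tw
        return x[:, ih][:, :, iw]

    cue = resize(pos_h, h, w) + resize(pos_w, h, w) + resize(saliency, h, w)
    gate = 1.0 / (1.0 + np.exp(-cue))  # sigmoid spatial gate in [0, 1]
    return pyr_feat * gate             # enhanced pyramid feature, same shape

# Toy usage: a low-level map at 64x64 refines a pyramid level at 32x32.
low = np.random.rand(8, 64, 64)
p3 = np.random.rand(8, 32, 32)
ph, pw, sal = spiem(low)
p3_refined = cim(p3, ph, pw, sal)
print(p3_refined.shape)  # (8, 32, 32)
```

As the abstract notes, such a module is drop-in: it consumes and produces feature maps of the pyramid's own shapes, so it can wrap the lateral connections of a standard FPN without changing the detector head.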

URL

https://arxiv.org/abs/2404.01614

PDF

https://arxiv.org/pdf/2404.01614.pdf

