Abstract
Small object detection in aerial imagery poses significant challenges in computer vision due to the limited pixel information inherent in small objects and their propensity to be obscured by larger objects and background noise. Traditional transformer-based methods are often limited by the lack of specialized datasets, which adversely affects their performance on objects of varying orientations and scales, underscoring the need for more adaptable, lightweight models. In response, this paper introduces two approaches that significantly enhance detection and segmentation of small aerial objects. First, we explore the use of the SAHI framework on the newly introduced lightweight YOLO v9 architecture, which employs Programmable Gradient Information (PGI) to reduce the substantial information loss typically incurred during sequential feature extraction. Second, we employ the Vision Mamba model, which incorporates position embeddings for precise location-aware visual understanding, combined with a novel bidirectional State Space Model (SSM) for effective visual context modeling. This SSM combines the linear complexity of CNNs with the global receptive field of Transformers, making it particularly effective for remote sensing image classification. Our experimental results demonstrate substantial improvements in detection accuracy and processing efficiency, validating these approaches for real-time small object detection across diverse aerial scenarios. We also discuss how these methodologies could serve as foundational models for future advances in aerial object recognition. The source code will be made accessible here.
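SAHI's core idea (slicing-aided hyper inference) is to tile a large aerial image into overlapping crops, run the detector on each crop, and map the resulting boxes back to full-image coordinates, so that small objects occupy a much larger fraction of each inference input. A minimal sketch of that slicing logic in plain NumPy, with `detect_fn` standing in for any detector such as YOLO v9; the tile size, overlap ratio, and helper names here are illustrative assumptions, not SAHI's actual API:

```python
import numpy as np

def slice_offsets(h, w, tile=256, overlap=0.2):
    """Top-left (y, x) corners of overlapping tiles that cover an h x w image."""
    step = max(1, int(tile * (1 - overlap)))
    ys = list(range(0, max(h - tile, 0) + 1, step))
    xs = list(range(0, max(w - tile, 0) + 1, step))
    # Ensure the last row/column of tiles reaches the image border.
    if ys[-1] + tile < h:
        ys.append(h - tile)
    if xs[-1] + tile < w:
        xs.append(w - tile)
    return [(y, x) for y in ys for x in xs]

def sliced_predict(image, detect_fn, tile=256, overlap=0.2):
    """Run detect_fn on each tile and shift its boxes to full-image coordinates.

    detect_fn(crop) -> list of (x1, y1, x2, y2, score) in crop coordinates.
    """
    h, w = image.shape[:2]
    all_boxes = []
    for y0, x0 in slice_offsets(h, w, tile, overlap):
        crop = image[y0:y0 + tile, x0:x0 + tile]
        for (x1, y1, x2, y2, score) in detect_fn(crop):
            all_boxes.append((x1 + x0, y1 + y0, x2 + x0, y2 + y0, score))
    return all_boxes
```

In practice the merged boxes are then deduplicated with non-maximum suppression across overlapping tiles, which the real SAHI pipeline handles internally.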
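The bidirectional SSM underlying Vision Mamba processes a token sequence with a linear recurrence run in both directions, which is where its linear time complexity in sequence length comes from. A toy sketch of that recurrence, assuming a fixed (non-selective) state transition and parameters shared between directions, unlike the input-dependent, separately parameterized scans in the actual Mamba architecture:

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Linear state-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t.

    x: (T, d_in) sequence; A: (d_state, d_state); B: (d_state, d_in);
    C: (d_out, d_state). Runs in O(T) steps over the sequence.
    """
    T = x.shape[0]
    h = np.zeros(A.shape[0])
    ys = np.empty((T, C.shape[0]))
    for t in range(T):
        h = A @ h + B @ x[t]
        ys[t] = C @ h
    return ys

def bidirectional_ssm(x, A, B, C):
    """Sum a forward scan and a backward scan so every position sees both
    preceding and following context, as in Vision Mamba's bidirectional SSM."""
    fwd = ssm_scan(x, A, B, C)
    bwd = ssm_scan(x[::-1], A, B, C)[::-1]
    return fwd + bwd
```

With `A` as the identity the forward scan reduces to a cumulative sum over the sequence, which makes the linear-recurrence behavior easy to verify by hand.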
URL
https://arxiv.org/abs/2405.01699