Paper Reading AI Learner

Simple In-place Data Augmentation for Surveillance Object Detection

2024-04-17 10:20:16
Munkh-Erdene Otgonbold, Ganzorig Batnasan, Munkhjargal Gochoo

Abstract

Motivated by the need to improve model performance in traffic monitoring tasks with limited labeled samples, we propose a straightforward augmentation technique for object detection datasets, designed specifically for stationary-camera applications. Our approach places augmented objects at the same positions as the originals, which is key to its effectiveness. By applying in-place augmentation with objects drawn from images of the same camera, we avoid overlap with the original and previously placed objects. Through extensive testing on two traffic monitoring datasets, we demonstrate the efficacy of our augmentation strategy in improving model performance, particularly in scenarios with limited labeled samples and imbalanced class distributions. Notably, our method achieves performance comparable to models trained on the entire dataset while using only 8.5% of the original data. Moreover, we report significant improvements on the FishEye8K dataset, with mAP@.5 increasing from 0.4798 to 0.5025 and mAP@.5:.95 rising from 0.29 to 0.3138. These results highlight the potential of our augmentation approach for enhancing object detection models in traffic monitoring applications.
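The core idea described in the abstract, pasting objects from one frame of a stationary camera into another frame at their original coordinates while rejecting pastes that would overlap existing or previously pasted objects, can be sketched as follows. This is a minimal illustration, not the authors' released code; the function names (`inplace_augment`, `iou`), the IoU rejection threshold, and the box format `(x1, y1, x2, y2)` are assumptions for the sketch.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter + 1e-9)

def inplace_augment(target_img, target_boxes, source_img, source_boxes,
                    iou_thresh=0.1):
    """Paste objects from a same-camera source frame into the target frame
    at their ORIGINAL coordinates (hence "in-place"), skipping any object
    that would overlap an original or previously pasted box."""
    out = target_img.copy()
    boxes = [tuple(b) for b in target_boxes]  # original annotations
    added = []                                # boxes successfully pasted
    for box in source_boxes:
        if any(iou(box, b) > iou_thresh for b in boxes):
            continue  # would overlap an existing object; reject
        x1, y1, x2, y2 = map(int, box)
        out[y1:y2, x1:x2] = source_img[y1:y2, x1:x2]
        boxes.append(tuple(box))
        added.append(tuple(box))
    return out, added
```

Because both frames come from the same fixed camera, pasting at identical pixel coordinates keeps each object's scale, perspective, and background context consistent, which is what distinguishes this from generic copy-paste augmentation.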

URL

https://arxiv.org/abs/2404.11226

PDF

https://arxiv.org/pdf/2404.11226.pdf
