Paper Reading AI Learner

Low-Light Image Enhancement Framework for Improved Object Detection in Fisheye Lens Datasets

2024-04-15 18:32:52
Dai Quoc Tran, Armstrong Aboah, Yuntae Jeon, Maged Shoman, Minsoo Park, Seunghee Park

Abstract

This study addresses the evolving challenges of urban traffic monitoring systems based on fisheye-lens cameras by proposing a framework that improves the efficacy and accuracy of these systems. In the context of urban infrastructure and transportation management, advanced traffic monitoring systems have become critical for managing the complexities of urbanization and increasing vehicle density. Traditional monitoring methods, which rely on static cameras with narrow fields of view, are ineffective in dynamic urban environments and necessitate the installation of multiple cameras, which raises costs. Fisheye lenses, by contrast, provide wide, omnidirectional coverage in a single frame, making them a transformative solution. However, they introduce issues such as view distortion and blurriness, which hinder accurate object detection in these images. Motivated by these challenges, this study proposes a novel approach that combines a transformer-based image enhancement framework with an ensemble learning technique to address these challenges and improve traffic monitoring accuracy, making significant contributions to the future of intelligent traffic management systems. Our proposed methodological framework won 5th place in the 2024 AI City Challenge, Track 4, with an F1 score of 0.5965 on the experimental validation data. The experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed system. Our code is publicly available at this https URL.
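
The abstract describes a two-stage pipeline: low-light enhancement of the fisheye frames followed by ensembled object detection. The sketch below illustrates one plausible arrangement of that pipeline under stated assumptions; it is not the authors' released code. The transformer-based enhancer and the detector handles are hypothetical placeholder callables, and a simple pooled per-class NMS stands in for the ensemble step, since the exact fusion scheme is not specified in this listing.

```python
# Minimal sketch of an enhance-then-detect pipeline (assumed structure,
# not the authors' implementation). Placeholder names: `enhancer`,
# `detectors`, `ensemble_detections`.
import numpy as np

def enhance(image: np.ndarray, enhancer) -> np.ndarray:
    """Apply a (hypothetical) transformer-based low-light enhancer to a fisheye frame."""
    return enhancer(image)

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def ensemble_detections(det_lists, iou_thr=0.6):
    """Greedy per-class NMS over the pooled predictions of several detectors.

    det_lists: list of (N_i, 6) arrays with rows [x1, y1, x2, y2, score, cls].
    Returns a (K, 6) array of fused detections.
    """
    pooled = np.concatenate(det_lists, axis=0)
    keep = []
    for cls in np.unique(pooled[:, 5]):
        dets = pooled[pooled[:, 5] == cls]
        dets = dets[np.argsort(-dets[:, 4])]          # highest score first
        while len(dets):
            best, dets = dets[0], dets[1:]
            keep.append(best)
            if len(dets):
                dets = dets[iou(best[:4], dets[:, :4]) < iou_thr]
    return np.stack(keep) if keep else np.empty((0, 6))

if __name__ == "__main__":
    # Toy example: two "detectors" report slightly different boxes for one object.
    d1 = np.array([[10, 10, 50, 50, 0.9, 0]])
    d2 = np.array([[12, 11, 52, 49, 0.8, 0]])
    print(ensemble_detections([d1, d2]))   # keeps only the higher-scoring box
```

In a full system, each raw frame would first pass through the enhancer and then through every detector in the ensemble; stronger fusion strategies (e.g., weighted boxes fusion) could replace the plain NMS shown here.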

Abstract (translated)

This study addresses the evolving challenges faced by urban traffic monitoring systems based on fisheye-lens cameras and proposes a framework to improve the effectiveness and accuracy of these systems. In the context of urban infrastructure and transportation management, advanced traffic monitoring systems are critical for managing the complexity of urbanization and increasing vehicle density. Traditional monitoring methods rely on static cameras with narrow fields of view, are ineffective in dynamic urban environments, and require installing multiple cameras, which increases costs. Recently introduced fisheye lenses provide wide, omnidirectional coverage in a single frame, making them a transformative solution. However, problems such as distortion and blurriness limit accurate object detection in these images. To address these challenges, this study proposes a new method that combines a Transformer-based image enhancement framework with an ensemble learning technique to solve these problems and improve traffic monitoring accuracy, making an important contribution to the development of intelligent traffic management systems. Our proposed methodological framework won 5th place in Track 4 of the 2024 AI City Challenge, with an F1 score of 0.5965 on the experimental validation data. The experimental results demonstrate the effectiveness, efficiency, and robustness of the proposed system. Our code is publicly available at this https URL.

URL

https://arxiv.org/abs/2404.10078

PDF

https://arxiv.org/pdf/2404.10078.pdf

