Abstract
In this paper, we present FogGuard, a novel fog-aware object detection network designed to address the challenges posed by foggy weather conditions. Autonomous driving systems rely heavily on accurate object detection algorithms, but adverse weather can significantly degrade the reliability of deep neural networks (DNNs). Existing approaches fall into two main categories: 1) image enhancement, such as IA-YOLO, and 2) domain-adaptation-based approaches. Image-enhancement techniques attempt to generate a fog-free image; however, recovering a fog-free image from a foggy one is a much harder problem than detecting objects in a foggy image. Domain-adaptation-based approaches, on the other hand, do not make use of labelled datasets in the target domain. Both categories thus attempt to solve a harder version of the problem. Our approach instead builds on fine-tuning: the framework is specifically designed to compensate for the foggy conditions present in the scene, ensuring robust performance. We adopt YOLOv3 as the baseline object detection algorithm and introduce a novel teacher-student perceptual loss to achieve high-accuracy object detection in foggy images. Through extensive evaluations on common datasets such as PASCAL VOC and RTTS, we demonstrate the performance improvement achieved by our network: FogGuard achieves 69.43% mAP on the RTTS dataset, compared to 57.78% for YOLOv3. Furthermore, we show that while our training method increases training time, it introduces no additional overhead during inference compared to the regular YOLO network.
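The abstract only names the teacher-student perceptual loss; the paper's actual formulation is not given here. As a purely illustrative sketch (our own, not the authors' code), such a loss typically penalizes the distance between intermediate features of a teacher network (fed the clean image) and a student network (fed the foggy image); feature maps are represented below as plain nested lists of floats:

```python
# Hypothetical sketch of a teacher-student perceptual loss.
# Assumption: the loss is a mean squared error between corresponding
# intermediate feature maps of the teacher (clean input) and the
# student (foggy input). The real FogGuard loss may differ.

def perceptual_loss(teacher_feats, student_feats):
    """Mean squared difference between matched teacher/student features.

    Each argument is a list of layers, each layer a flat list of floats.
    """
    assert len(teacher_feats) == len(student_feats)
    total, count = 0.0, 0
    for t_layer, s_layer in zip(teacher_feats, student_feats):
        assert len(t_layer) == len(s_layer)
        for t, s in zip(t_layer, s_layer):
            total += (t - s) ** 2
            count += 1
    return total / count

# Toy usage: identical features give zero loss; mismatched features do not.
teacher = [[1.0, 2.0], [3.0]]
student = [[1.0, 2.5], [3.0]]
loss = perceptual_loss(teacher, student)
```

During training, this extra term is minimized alongside the usual detection loss, which is consistent with the abstract's claim that the method adds cost only at training time, not at inference.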
URL
https://arxiv.org/abs/2403.08939