Abstract
Wearing a mask is an important measure for preventing infectious diseases, but monitoring whether people are wearing masks in crowded public places is difficult. To address this problem, this paper proposes a masked-face detection model based on YOLOv5l. First, Multi-Head Attentional Self-Convolution improves both the convergence speed and the detection accuracy of the model. Second, the Swin Transformer Block is introduced to extract richer feature information, strengthening small-target detection and improving the overall accuracy of the model. The designed I-CBAM module further improves target detection accuracy, and enhanced feature fusion allows the model to better adapt to object detection tasks at different scales. In experiments on the MASK dataset, the proposed model improves mAP(0.5) by 1.1% and mAP(0.5:0.95) by 1.3% compared to the YOLOv5l baseline, significantly enhancing the detection of mask-wearing.
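The abstract does not specify how the I-CBAM module differs from standard CBAM, but the general idea of CBAM-style attention (channel attention followed by spatial attention, each producing sigmoid-gated weights) can be sketched in NumPy. This is a minimal illustration under assumptions: the weights `w1`, `w2` (shared channel MLP) and `k` (7x7 spatial convolution kernel) are hypothetical placeholders, not the paper's parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """CBAM channel attention. x: (C, H, W); w1: (C//r, C), w2: (C, C//r)
    form a shared two-layer MLP applied to avg- and max-pooled descriptors."""
    avg = x.mean(axis=(1, 2))   # global average pool -> (C,)
    mx = x.max(axis=(1, 2))     # global max pool -> (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None]

def spatial_attention(x, k):
    """CBAM spatial attention. x: (C, H, W); k: (7, 7, 2) kernel convolved
    over the stacked channel-wise average and max maps."""
    maps = np.stack([x.mean(axis=0), x.max(axis=0)], axis=-1)  # (H, W, 2)
    h, w = maps.shape[:2]
    padded = np.pad(maps, ((3, 3), (3, 3), (0, 0)))  # same-size 7x7 conv
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 7, j:j + 7, :] * k)
    return x * sigmoid(out)[None, :, :]

def cbam(x, w1, w2, k):
    # Channel attention first, then spatial attention, as in standard CBAM.
    return spatial_attention(channel_attention(x, w1, w2), k)
```

Because both attention maps pass through a sigmoid, the module only rescales features (weights in (0, 1)) and preserves the input's shape, which is what lets it drop into a YOLOv5-style backbone without changing tensor dimensions.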
URL
https://arxiv.org/abs/2310.10245