Abstract

This study aims to improve the YOLOv5 (You Only Look Once) model for real-time mask detection in complex scenarios involving multiple objects, unclear features, and occlusions. The experiment builds on the baseline model by refining its input structure and incorporating mosaic data augmentation. It also employs Spatial Pyramid Pooling Fusion (SPPF) to fuse local and global features at the feature-map level. Detection is further enhanced by optimizing the overlap computation between predicted boxes and ground-truth boxes, using the Generalized Intersection over Union (GIoU) loss function to improve localization accuracy. The system is built in PyCharm, and the dataset consists of 6,005 images of masked and unmasked faces, split into training and test sets at an 8:2 ratio. The enhanced facial feature extraction network detects mask-wearers in real time and maintains high recognition rates in crowded public spaces, supporting real-time virus transmission control in communal areas. The enhanced model achieved 92.9% recognition accuracy in this experiment, surpassing other detection models and demonstrating its quality and effectiveness.
