Automating face mask detection in public areas is paramount to maintaining public health, especially in the context of the COVID-19 pandemic. Technologies such as deep learning and computer vision enable effective monitoring of mask compliance, thereby minimizing the risk of virus spread. Real-time detection supports prompt intervention and enforcement of mask use, preventing potential outbreaks and ensuring compliance with public health guidelines. This approach saves human resources and makes the enforcement of mask wearing in public areas consistent and objective. Automatic face mask detection thus serves as a key tool for preventing the spread of contagious diseases, protecting public health, and creating a safer environment for everyone. This study addresses the challenges of real-time face mask detection via drone surveillance in public spaces across three categories: mask worn correctly, mask worn incorrectly, and no mask. Addressing these challenges requires an efficient and robust object detection and recognition algorithm that can handle crowds of multiple faces captured by a mobile camera carried by a mini drone and perform real-time video processing. Accordingly, this study proposes a You Only Look Once (YOLO) based deep learning C-Mask model for real-time face mask detection and recognition via drone surveillance in public spaces. The C-Mask model is designed to operate within a mini drone surveillance system and provide efficient and robust face mask detection. The model performs preprocessing, feature extraction, feature generation, feature enhancement, feature selection, and multivariate classification tasks in each face mask detection cycle. The preprocessing task prepares the training and testing data in the form of images for further processing, and the feature extraction task is performed using a Convolutional Neural Network (CNN).
Moreover, Cross-Stage Partial (CSP) DarkNet53 is used to improve feature extraction and strengthen the model's object detection ability. A data augmentation algorithm performs feature generation to enhance the model's training robustness. Feature enhancement is performed by the Path Aggregation Network (PANet) and Spatial Pyramid Pooling Network (SPPNet) algorithms, which refine the extracted and generated features. Classification is performed through multi-label classification, wherein each object in an image can belong to multiple classes simultaneously, and the network generates a grid of bounding boxes with corresponding confidence scores for each class. The YOLO-based C-Mask model was tested across various face mask detection scenarios with varying mask colors and types to verify the efficiency and robustness of the proposed model. The test results show that the model can correctly and effectively detect face masks in real-time video streams under various conditions, with an overall accuracy of 92.20%, precision of 92.04%, recall of 90.83%, and F1-score of 89.95% across all three classes. These scores were obtained despite mini drone mobility and camera orientation adjustments, which substantially affect face mask detection performance.
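The multi-label scoring step described above can be sketched as follows. This is an illustrative assumption of a YOLO-style decoding, not the paper's actual implementation: the tensor layout (one objectness logit plus one logit per class for each predicted box), the class names, and the confidence threshold are all hypothetical. Independent sigmoids per class, rather than a softmax, are what allow a box to carry more than one label at once.

```python
import numpy as np

# Hypothetical class names for the study's three categories.
CLASSES = ["mask", "incorrect_mask", "no_mask"]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_detections(raw, conf_threshold=0.5):
    """Decode raw per-box outputs into (box_index, class_name, confidence).

    `raw` has shape (num_boxes, 1 + num_classes): column 0 is an
    objectness logit, the rest are per-class logits. Each class logit
    passes through its own sigmoid, so a single box may exceed the
    threshold for multiple classes (multi-label behavior).
    """
    detections = []
    for i, row in enumerate(raw):
        objectness = sigmoid(row[0])
        class_probs = sigmoid(row[1:])
        for cls_idx, p in enumerate(class_probs):
            confidence = objectness * p  # joint score for this box/class pair
            if confidence >= conf_threshold:
                detections.append((i, CLASSES[cls_idx], float(confidence)))
    return detections

# Two example boxes: a confident "mask" prediction and a background box.
raw = np.array([
    [2.0,  3.0, -3.0, -3.0],   # high objectness, strong "mask" logit
    [-4.0, 0.0,  0.0,  0.0],   # low objectness: filtered out
])
print(decode_detections(raw))
```

In a full pipeline these decoded detections would then go through non-maximum suppression to merge overlapping boxes, which is omitted here for brevity.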