Abstract
Wearing masks in a crowded environment can reduce the risk of infection; however, wearing nonstandard masks does not provide good protection against the virus, which makes it necessary to monitor mask wearing in real time. You only look once (YOLO) series models are widely used on various edge devices. The existing YOLOv5s method meets the requirements of inference time, but because of its generality it falls slightly short in accuracy. Considering the characteristics of our driver medical mask dataset, a position-insensitive loss, which can extract shared-area features across different categories, and a half deformable convolution net, which can attend to noteworthy features, were introduced into YOLOv5s to improve accuracy, yielding an increase of 6.7% in mean average precision at IoU 0.5 (mAP@.5) and 8.3% in mAP@.5:.95 on our dataset. To ensure that our method can be applied in real scenarios, TensorRT and CUDA were introduced to reduce the inference time on two edge devices (Jetson TX2 and Jetson Nano) and one desktop device, where inference was faster than with previous methods.
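The abstract describes the "half deformable convolution" only at a high level. As an illustration, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes the module splits the input channels, passes one half through a standard convolution and the other half through a deformable convolution (torchvision.ops.DeformConv2d) whose offsets are predicted by an auxiliary convolution, then concatenates the two branches. The module name HalfDeformConv and the exact channel split are hypothetical.

    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class HalfDeformConv(nn.Module):
        """Hypothetical sketch of a 'half deformable' convolution block:
        half of the channels use a standard conv, the other half a
        deformable conv, and the outputs are concatenated."""

        def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
            super().__init__()
            c_in_a = in_channels // 2          # channels for the standard branch
            c_in_b = in_channels - c_in_a      # channels for the deformable branch
            c_out_a = out_channels // 2
            c_out_b = out_channels - c_out_a
            # Standard convolution branch
            self.conv = nn.Conv2d(c_in_a, c_out_a, kernel_size, padding=padding)
            # Offset predictor: 2 offsets (x, y) per kernel sampling location
            self.offset = nn.Conv2d(c_in_b, 2 * kernel_size * kernel_size,
                                    kernel_size, padding=padding)
            # Deformable convolution branch
            self.dcn = DeformConv2d(c_in_b, c_out_b, kernel_size, padding=padding)

        def forward(self, x):
            c_a = x.shape[1] // 2
            x_a, x_b = x[:, :c_a], x[:, c_a:]
            y_a = self.conv(x_a)
            y_b = self.dcn(x_b, self.offset(x_b))
            return torch.cat([y_a, y_b], dim=1)

    # Example usage on a dummy feature map
    block = HalfDeformConv(64, 64)
    out = block(torch.randn(1, 64, 80, 80))   # -> shape (1, 64, 80, 80)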