Abstract
Video surveillance extensively uses video-based person detection and tracking technology. Most person detection and classification techniques currently in use encounter challenges in video sequences caused by occlusion, ambient lighting, and variations in human facial pose. This paper proposes an effective deep-learning-based person identification and classification system, built around a you only look once version 8 (YOLOv8) detection and classification model, to classify human faces in video sequences accurately. The work also introduces a new staff detection and classification (S-DEC) dataset for comprehensive performance evaluation; the visual tracker benchmark (VTB) standard database is used for performance comparison with the proposed S-DEC dataset. The proposed technique achieved 98.67% precision. On the S-DEC dataset, the system achieved 94.67% accuracy in identifying facial images from a video sequence of 38 people, addressing the pose-variation and occlusion challenges. Earlier methods typically achieved approximately 85% to 90% accuracy while requiring more execution time, and many existing techniques only detected people; identification of the detected person has been addressed in few papers. The proposed method uses the cross-stage partial connections (CSPDarknet53) model, integrated with YOLOv8, to achieve faster results. The proposed framework took 35 minutes to train the deep learning model, and a testing time of 2 minutes showed that the proposed framework outperformed other existing methodologies and successfully identified additional information about the detected person.
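To illustrate the kind of YOLOv8 detection-and-classification pipeline the abstract describes, the sketch below fine-tunes and runs a YOLOv8 model with the ultralytics Python package. This is a minimal sketch, not the authors' implementation: the dataset config `s_dec.yaml`, the video file name, and the choice of pretrained weights are assumptions for illustration, since the paper's actual S-DEC data and training setup are not reproduced here.

```python
# Minimal sketch of a YOLOv8 face detection/classification pipeline using the
# ultralytics package. File names ("s_dec.yaml", "staff_video.mp4") and the
# pretrained weights are illustrative assumptions, not the paper's artifacts.
from ultralytics import YOLO

# Load a YOLOv8 model; its backbone follows the CSPDarknet design family.
model = YOLO("yolov8n.pt")  # pretrained nano weights as a starting point

# Fine-tune on a hypothetical S-DEC dataset config (YOLO-format labels).
model.train(data="s_dec.yaml", epochs=50, imgsz=640)

# Run inference on a video sequence; ultralytics accepts video files directly,
# and stream=True yields per-frame results as a generator.
results = model("staff_video.mp4", stream=True)
for r in results:
    for box in r.boxes:
        cls_id = int(box.cls)    # predicted class, e.g. a staff identity
        conf = float(box.conf)   # detection confidence
        print(model.names[cls_id], conf, box.xyxy.tolist())
```

In such a setup, identifying "extra information about the detected person" amounts to training the detector with one class per known individual, so each bounding box carries an identity label rather than a generic "person" class.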