Abstract

This paper presents a system for online multi-face tracking in unconstrained videos. Such videos are characterised by varying shooting angles, strong illumination changes, abrupt motion, and frequent face pose changes. The proposed online multi-face tracking system combines deep convolutional neural network (CNN) face detection, multiple instances of a tracker based on discriminative scale and space correlation filters, shot change detection, tracking failure detection, tracklet generation, and ResNet-based face identity label assignment. The system is evaluated on a dataset of YouTube music videos whose sequences exhibit large visual differences caused by face appearance variations (changes in pose, size, makeup, and illumination) and/or rapid camera motion. Experimental results, expressed with the MOTA, MOTP, and IDS metrics, are reported and compared with state-of-the-art multi-target trackers.
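To make the described pipeline concrete, the following Python sketch shows how the stages named in the abstract (CNN face detection, one correlation filter tracker per face, shot change detection, tracking failure detection, tracklet generation, and identity labelling) could be combined in an online per-frame loop. It is a minimal illustration under stated assumptions, not the authors' implementation: all names (detect_faces, CorrelationFilterTracker, is_shot_change, assign_identity) and the detect_every and failure_threshold parameters are hypothetical placeholders.

```python
# Hypothetical sketch of an online multi-face tracking loop as outlined in the
# abstract. All functions below are illustrative stubs, not the paper's code.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

BBox = Tuple[int, int, int, int]  # (x, y, width, height)


@dataclass
class Tracklet:
    boxes: List[BBox] = field(default_factory=list)
    identity: Optional[str] = None  # filled in by the identity-labelling step


class CorrelationFilterTracker:
    """Placeholder for a discriminative scale/space correlation filter tracker."""

    def __init__(self, frame, box: BBox):
        self.box = box

    def update(self, frame) -> Tuple[BBox, float]:
        # A real tracker would correlate learned filters with the new frame and
        # return the updated box plus a response score used for failure detection.
        return self.box, 1.0


def detect_faces(frame) -> List[BBox]:
    return []  # stand-in for the CNN face detector


def is_shot_change(prev_frame, frame) -> bool:
    return False  # stand-in for the shot change detector


def assign_identity(tracklet: Tracklet) -> str:
    return "unknown"  # stand-in for ResNet embedding + identity label assignment


def track_video(frames, detect_every: int = 10, failure_threshold: float = 0.2):
    active: List[Tuple[CorrelationFilterTracker, Tracklet]] = []
    tracklets: List[Tracklet] = []
    prev_frame = None
    for t, frame in enumerate(frames):
        # Shot change: terminate all current trackers and start fresh in the new shot.
        if prev_frame is not None and is_shot_change(prev_frame, frame):
            active = []
        # Periodic CNN detection (re)initialises one tracker and one tracklet per face.
        if t % detect_every == 0 or not active:
            active = []
            for box in detect_faces(frame):
                tracklet = Tracklet()
                tracklets.append(tracklet)
                active.append((CorrelationFilterTracker(frame, box), tracklet))
        # Update every correlation filter tracker; a low response flags tracking failure.
        still_active = []
        for tracker, tracklet in active:
            box, score = tracker.update(frame)
            if score >= failure_threshold:
                tracklet.boxes.append(box)
                still_active.append((tracker, tracklet))
        active = still_active
        prev_frame = frame
    # Assign a face identity label to each generated tracklet.
    for tracklet in tracklets:
        tracklet.identity = assign_identity(tracklet)
    return tracklets


if __name__ == "__main__":
    print(track_video(frames=[object()] * 30))
```

The design choice reflected here is that detection and identity labelling are relatively expensive, so per-frame motion is handled by the lightweight correlation filter trackers, while detection runs periodically and identity assignment operates on whole tracklets rather than single frames.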

Disclaimer: All third-party content on this website/platform is and will remain the property of their respective owners and is provided on "as is" basis without any warranties, express or implied. Use of third-party content does not indicate any affiliation, sponsorship with or endorsement by them. Any references to third-party content is to identify the corresponding services and shall be considered fair use under The CopyrightLaw.