Abstract

Face detection and tracking generally focus only on visual data analysis. In this paper, we propose a novel method for face tracking in camera video. By exploiting the context metadata captured by wearable sensors on human bodies at the time of video recording, we can improve the performance and efficiency of traditional face tracking algorithms. Specifically, when subjects wearing motion sensors move around in a camera's field of view (FOV), the motion features collected by those sensors help locate the frames most likely to contain faces in the recorded video, saving a large amount of time otherwise spent filtering out faceless frames and reducing the proportion of false alarms. We conduct extensive experiments to evaluate the proposed method and achieve promising results.
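The sensor-assisted pre-filtering idea described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the function name, the fixed motion threshold, and the assumption that high accelerometer magnitude implies the subject is in the FOV are all hypothetical choices made here to show how synchronized sensor readings could select candidate frames before running a costly face detector.

```python
# Hypothetical sketch: use wearable-sensor motion magnitude to pre-filter
# video frames before running a face detector. The threshold rule and all
# names are illustrative assumptions, not the method from the paper.

def candidate_frames(accel_magnitudes, fps, sample_rate, threshold=1.5):
    """Map accelerometer magnitude samples to the indices of video frames
    whose time-synchronized sensor reading exceeds a motion threshold."""
    frames = set()
    for i, mag in enumerate(accel_magnitudes):
        t = i / sample_rate           # sensor timestamp in seconds
        if mag > threshold:           # subject moving -> plausibly in FOV
            frames.add(int(t * fps))  # corresponding video frame index
    return sorted(frames)

# Example: 30 fps video, 10 Hz accelerometer, motion burst mid-stream.
mags = [0.2, 0.3, 2.1, 2.4, 0.1, 1.9]
print(candidate_frames(mags, fps=30, sample_rate=10))  # -> [6, 9, 15]
```

Only the selected frames would then be passed to the face tracker, which is where the claimed savings over scanning every frame would come from.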
