Abstract
A compressed-domain generic object tracking algorithm, combined with a face detection algorithm, offers a low-computational-cost solution to the problem of detecting and locating faces in frames of compressed video sequences (such as MPEG-1 or MPEG-2). Objects such as faces can thus be tracked through a compressed video stream using motion information provided by the existing forward and backward motion vectors. The described solution requires only modest computational resources on consumer electronics (CE) devices while still achieving sufficiently good location rates.
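The core idea of compressed-domain tracking can be illustrated with a minimal sketch: shift an object's bounding box from frame to frame using the motion vectors already present in the bitstream, with no pixel decoding. The function below is an illustrative assumption, not the paper's actual algorithm; it uses the median motion vector of the macroblocks overlapping the current box, and all names and the macroblock representation are hypothetical.

```python
# Hypothetical sketch: propagate a bounding box through a compressed video
# stream using macroblock motion vectors. Names and data layout are
# illustrative assumptions, not the paper's actual algorithm.

def track_box(box, motion_vectors, mb_size=16):
    """Shift a (x, y, w, h) box by the median motion vector of the
    macroblocks it overlaps. motion_vectors maps (mb_x, mb_y) grid
    coordinates to (dx, dy) pixel displacements for the current frame."""
    x, y, w, h = box
    dxs, dys = [], []
    for (mb_x, mb_y), (dx, dy) in motion_vectors.items():
        px, py = mb_x * mb_size, mb_y * mb_size
        # Keep only vectors from macroblocks that intersect the box.
        if px < x + w and px + mb_size > x and py < y + h and py + mb_size > y:
            dxs.append(dx)
            dys.append(dy)
    if not dxs:  # no overlapping macroblocks: leave the box in place
        return box
    dxs.sort()
    dys.sort()
    mdx, mdy = dxs[len(dxs) // 2], dys[len(dys) // 2]
    return (x + mdx, y + mdy, w, h)

# Usage: a 32x32 box over a region of macroblocks all moving by (4, -2).
mvs = {(mb_x, mb_y): (4, -2) for mb_x in range(2, 4) for mb_y in range(2, 4)}
print(track_box((32, 32, 32, 32), mvs))  # (36, 30, 32, 32)
```

Using the median (rather than the mean) of the overlapping vectors makes the box update robust to the occasional spurious motion vector, which is a common concern when reusing encoder-generated motion fields for tracking.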
Highlights
The problem of tracking objects over time is a complex one in computer vision and has been an important topic of research over the last few years.
The type of information that is available varies from frame to frame: for example, MPEG-1 and MPEG-2 video sequences are typically composed of I-, P-, and B-frames, each with its own set of parameters.
In this paper we describe an object tracking solution that uses only compressed-domain parameters available in MPEG-1 or MPEG-2 video sequences, performing only the minimal decoding necessary to retrieve them from the compressed video streams.
Summary
The problem of tracking objects over time is a complex one in computer vision and has been an important topic of research over the last few years. Such importance comes from the fact that object tracking enables important applications in areas such as security and surveillance (e.g., tracking people in restricted areas using security cameras), content management (e.g., in video abstraction to automatically annotate video content), content improvement (e.g., helping stabilize images in handheld mobile videophones by tracking the location of faces), human-machine interfaces (e.g., recognizing hand gestures to automatically execute commands), interactive gaming, and so forth. The organization of this paper is as follows: the next section describes the face tracking algorithm; Section 3 evaluates the algorithm's computational complexity and performance; and the paper concludes with Section 4.