Abstract

Automatic baggage detection has become a subject of significant practical interest in recent years. In this paper, we propose an approach to baggage detection in CCTV video footage that uses color information to address some of the vital shortcomings of state-of-the-art algorithms. The proposed approach consists of the steps typically used in baggage detection, namely, estimating the direction of movement of humans carrying baggage, constructing human-like temporal templates, and aligning them with the best-matched view-specific exemplars. In addition, we use color information to define the region that most likely belongs to a human torso, in order to reduce false positive detections. A key novel contribution is the estimation of a person's viewing direction using machine learning and shoulder-shape-related features. Further enhancement of baggage detection and segmentation is achieved by exploiting the properties of the CIELAB color space. The proposed system has been extensively tested for its effectiveness, at each stage of improvement, on the PETS 2006 dataset and on additional CCTV video footage captured to cover specific test scenarios. The experimental results suggest that the proposed algorithm is capable of surpassing the performance of state-of-the-art baggage detection algorithms.
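The abstract mentions exploiting CIELAB color space properties for segmentation. As a minimal sketch of the standard conversion involved (sRGB → linear RGB → XYZ → Lab with a D65 white point; the function name and structure here are ours, not the paper's):

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triple (components in [0, 1]) to CIELAB (L*, a*, b*)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma to obtain linear-light RGB.
    linear = np.where(rgb <= 0.04045, rgb / 12.92,
                      ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> CIE XYZ (sRGB primaries, D65 white).
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ linear
    # Normalise by the D65 reference white.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    # CIELAB's piecewise cube-root compression.
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```

The attraction of Lab for a task like this is that lightness is isolated in L*, so chromatic (a*, b*) comparisons between a candidate baggage region and the torso region are less sensitive to shadows and illumination changes than raw RGB distances would be.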
