Abstract
In order to handle the complex databases of acquired images in the security area, a robust and adaptive framework is required for Video Surveillance Data Mining as well as for multi-shot pedestrian (re)-identification. The pedestrian's signature must be invariant and robust against noise and uncontrolled variations. In this paper, a new fast Gait-Appearance-based Multi-Scale Video Covariance (GAMS-ViCov) unsupervised approach is proposed to efficiently describe any image-sequence of a pedestrian, whether streamed or stored in a database, as a compact, fixed-size signature while exploiting all relevant spatiotemporal information. The proposed model is based on multi-scale features extracted from a novel data structure called the 'Two-Half-Video-Tree' (THVT), which represents pedestrians and allows uncontrolled variations to be discarded. THVT efficiently models the gait and appearance of the upper and lower parts of a person's silhouette as trees of multi-scale features, and can thus transform the video data into new structured forms through a fast algorithm. Furthermore, the GAMS-ViCov approach is also competitive as a dynamic video summarization technique: k-means clustering models the signatures extracted from each person's image-sequences into a cluster center, and for each person's cluster, the image-sequence whose signature is nearest to the centroid is selected and stored as that person's key image-sequence. The proposed approach was evaluated for person (re)-identification on the i-LIDS and PRID databases. The experimental results show that GAMS-ViCov outperforms most unsupervised approaches.
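The summarization step described in the abstract reduces to a nearest-to-centroid selection per person. The following is a minimal sketch, assuming the fixed-size GAMS-ViCov signatures have already been extracted; the function name, the toy 8-dimensional signatures, and the use of scikit-learn's KMeans are illustrative assumptions rather than the authors' implementation (with a single cluster per person, the centroid is simply the mean signature).

```python
import numpy as np
from sklearn.cluster import KMeans

def select_key_sequences(signatures, person_ids):
    """For each person, cluster that person's sequence signatures with
    k-means (k = 1) and keep the image-sequence whose signature lies
    nearest to the cluster centroid.

    signatures : (n_sequences, d) array of fixed-size signatures
    person_ids : length-n_sequences list of person labels
    Returns a dict mapping person id -> index of the key image-sequence.
    """
    signatures = np.asarray(signatures)
    key_sequences = {}
    for pid in set(person_ids):
        idx = [i for i, p in enumerate(person_ids) if p == pid]
        X = signatures[idx]
        # One cluster per person: the centroid summarizes all of that
        # person's image-sequences in signature space (here it equals
        # the mean signature, since k = 1).
        centroid = KMeans(n_clusters=1, n_init=10).fit(X).cluster_centers_[0]
        # Keep the sequence whose signature is closest to the centroid.
        nearest = idx[int(np.argmin(np.linalg.norm(X - centroid, axis=1)))]
        key_sequences[pid] = nearest
    return key_sequences

# Example: five sequences from two people with hypothetical 8-D signatures.
rng = np.random.default_rng(0)
sigs = rng.normal(size=(5, 8))
print(select_key_sequences(sigs, ["A", "A", "A", "B", "B"]))
```

The selected indices would then point to the key image-sequences stored as the per-person summaries of the video database.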