Abstract

With the proliferation of surveillance cameras, public-safety applications that require video-based person re-identification (Re-ID) are thriving. Re-ID aims to accurately identify a person of interest across video sequences captured by multiple cameras. Existing methods usually focus on either spatially salient regions or temporal features among frames at fixed intervals (i.e., either short- or long-term temporal features), leaving other features under-utilized and thus yielding only moderate identification accuracy. To achieve high Re-ID accuracy, we propose a novel framework termed Multi-granular Spatial–Temporal Network (MSTN), which fully exploits spatial–temporal features for video-based person Re-ID. Within MSTN, a Temporal Kernel Attention (TKA) module is proposed to adaptively capture both short- and long-term temporal relationships, and a Feature Disentanglement Spatial Attention (FDSA) module is further proposed to mine spatially salient and subtle features. Extensive experiments on the MARS dataset demonstrate that MSTN achieves high identification accuracy, reaching 86.1% mAP and 91.0% Rank-1 accuracy, notably higher than state-of-the-art comparison schemes.
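
The abstract names MSTN's two attention modules (TKA and FDSA) but gives no implementation details. The sketch below is only a minimal illustration, in PyTorch, of how spatial- and temporal-attention modules of this general kind might be composed over a per-frame CNN backbone; all layer choices, tensor shapes, kernel sizes, and module names here are assumptions for illustration, not the authors' actual architecture.

    # Illustrative sketch only -- the paper's actual TKA/FDSA designs are not
    # specified in the abstract; shapes and layer choices are assumptions.
    import torch
    import torch.nn as nn

    class SpatialAttention(nn.Module):
        """Stand-in for an FDSA-style module: re-weights spatial locations per frame."""
        def __init__(self, channels):
            super().__init__()
            self.score = nn.Conv2d(channels, 1, kernel_size=1)

        def forward(self, x):                      # x: (B*T, C, H, W)
            attn = torch.sigmoid(self.score(x))    # (B*T, 1, H, W) spatial map
            return x * attn                        # spatially re-weighted features

    class TemporalAttention(nn.Module):
        """Stand-in for a TKA-style module: multi-kernel 1D convolutions over the
        time axis mix short- and long-term frame relationships."""
        def __init__(self, channels, kernel_sizes=(3, 5, 7)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv1d(channels, channels, k, padding=k // 2) for k in kernel_sizes
            )
            self.fuse = nn.Conv1d(channels * len(kernel_sizes), channels, 1)

        def forward(self, x):                      # x: (B, T, C) pooled frame features
            x = x.transpose(1, 2)                  # (B, C, T)
            x = torch.cat([b(x) for b in self.branches], dim=1)
            attn = torch.softmax(self.fuse(x), dim=-1)   # attention over frames
            return attn.transpose(1, 2)            # (B, T, C) per-frame weights

    class VideoReID(nn.Module):
        """Toy composition: per-frame backbone, spatial re-weighting, then
        temporal-attention pooling into a single clip-level embedding."""
        def __init__(self, backbone, channels, num_ids):
            super().__init__()
            self.backbone = backbone               # any per-frame CNN returning (N, C, h, w)
            self.spatial = SpatialAttention(channels)
            self.temporal = TemporalAttention(channels)
            self.classifier = nn.Linear(channels, num_ids)

        def forward(self, clips):                  # clips: (B, T, 3, H, W)
            B, T = clips.shape[:2]
            feats = self.backbone(clips.flatten(0, 1))     # (B*T, C, h, w)
            feats = self.spatial(feats).mean(dim=(2, 3))   # (B*T, C)
            feats = feats.view(B, T, -1)                   # (B, T, C)
            weights = self.temporal(feats)                 # (B, T, C)
            clip_feat = (feats * weights).sum(dim=1)       # attention-weighted pooling
            return self.classifier(clip_feat), clip_feat

In this sketch the parallel 1D convolutions with different kernel sizes stand in for capturing frame relationships at multiple temporal ranges, and the spatial branch simply re-weights locations within each frame; the actual TKA and FDSA designs are described in the full paper.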
