Abstract

As a challenging computer vision task, video-based person Re-IDentification (Re-ID) has been intensively studied, and recent works have achieved a series of satisfactory results by capturing spatial-temporal relationships. However, extensive observations show that the feature vectors generated by a convolutional neural network contain considerable redundant information along the channel dimension, an issue that is seldom investigated. This paper studies a Spatial Temporal and Channel Aware Network (STCAN) for video-based Re-ID that jointly considers spatial, temporal, and channel information. Firstly, a Spatial Attention Enhanced (SAE) convolutional network is developed as the backbone to learn spatially enhanced features from video frames. Secondly, a Channel Segmentation and Group Shuffle (CSGS) convolution module is designed to jointly model temporal and channel relations. Finally, a Two Branch Weighted Fusion (TBWF) mechanism is introduced to enhance the robustness of the Re-ID network by fusing the outputs of the SAE backbone and the CSGS module. Comprehensive experiments are conducted on three large-scale datasets: MARS, LS-VID, and P-DESTRE. The experimental results show that STCAN effectively improves video-based Re-ID performance and outperforms several state-of-the-art methods.
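The abstract names two mechanisms whose structure is easy to make concrete. Below is a minimal PyTorch sketch of what a channel-segmentation-and-shuffle module and a two-branch weighted fusion could look like; the class names mirror the abstract, but the group count, kernel size, and learnable scalar weight are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CSGS(nn.Module):
    """Channel Segmentation and Group Shuffle (sketch): segment channels
    into groups, mix temporal context within each group via a grouped
    temporal convolution, then shuffle the groups so that subsequent
    layers see cross-group channel information."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        # Grouped 1D convolution over the frame axis (assumed design).
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3,
                                  padding=1, groups=groups)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames) -- per-frame feature vectors.
        b, c, t = x.shape
        x = self.temporal(x)
        # Channel shuffle: reshape to (b, groups, c//groups, t),
        # swap the group and sub-channel axes, and flatten back.
        x = x.view(b, self.groups, c // self.groups, t)
        x = x.transpose(1, 2).reshape(b, c, t)
        return x

class TBWF(nn.Module):
    """Two Branch Weighted Fusion (sketch): combine the backbone branch
    and the CSGS branch with a learnable scalar weight."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))

    def forward(self, backbone_feat, csgs_feat):
        w = torch.sigmoid(self.alpha)   # keep the weight in (0, 1)
        return w * backbone_feat + (1.0 - w) * csgs_feat

# Usage: fuse temporally pooled features from both branches.
feats = torch.randn(8, 256, 6)          # 8 tracklets, 256 channels, 6 frames
fused = TBWF()(feats.mean(dim=2), CSGS(256)(feats).mean(dim=2))
print(fused.shape)                      # torch.Size([8, 256])
```

The shuffle step is the standard reshape-transpose-flatten trick from grouped convolutions: without it, each channel group would only ever see its own channels across layers.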
