Abstract
The quality of visual feature representation has always been a key factor in many computer vision tasks. In the person re-identification (Re-ID) problem, combining global and local features to improve model performance has become a popular approach, because earlier works relied on global features alone, which are very limited at extracting discriminative local patterns from the obtained representation. Some existing works collect local patterns explicitly by slicing the global feature into several local pieces in a handcrafted way. By adopting such slicing and duplication operations, models can achieve relatively higher accuracy, but we argue that this still does not take full advantage of partial patterns, because of the fixed rules and strategies by which local slices are defined. In this paper, we show that by first over-segmenting the global region with the proposed multi-branch structure, and then learning to combine local features from neighbourhood regions using the proposed Collaborative Attention Network (CAN), the final feature representation for Re-ID can be further improved. Experimental results on several widely-used public datasets demonstrate that our method outperforms many existing state-of-the-art methods.
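The core idea of over-segmenting a global feature and then combining neighbouring parts can be sketched as follows. This is an illustrative NumPy toy, not the authors' implementation: the part count, the neighbourhood window of size three, and the dot-product attention are all assumptions standing in for the paper's multi-branch structure and Collaborative Attention Network (CAN).

```python
import numpy as np

def oversegment(feat, num_parts=6):
    """Split a (H, C) global feature map into horizontal strips.

    Over-segmenting into more strips than strictly needed lets a
    later attention step learn which neighbours to combine.
    (num_parts=6 is an illustrative choice, not the paper's setting.)
    """
    H, _ = feat.shape
    step = H // num_parts
    # Average-pool each strip into a single part-level feature vector.
    return [feat[i * step:(i + 1) * step].mean(axis=0) for i in range(num_parts)]

def combine_neighbours(parts):
    """Fuse each part with its immediate neighbours via softmax attention.

    Dot-product similarity between a part and its neighbours is used as
    the attention score -- a simple stand-in for a learned CAN module.
    """
    fused = []
    for i, p in enumerate(parts):
        neigh = [parts[j] for j in (i - 1, i, i + 1) if 0 <= j < len(parts)]
        scores = np.array([p @ n for n in neigh])
        w = np.exp(scores - scores.max())   # numerically stable softmax
        w /= w.sum()
        fused.append(sum(wi * n for wi, n in zip(w, neigh)))
    return fused
```

With a 24-row, 8-channel feature map, `oversegment` yields six 8-dimensional part vectors, and `combine_neighbours` returns six fused vectors of the same shape, each blending a part with its neighbours.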