Abstract

In recent years, attention mechanisms have received enormous interest in Person Re-Identification (RE-ID) owing to their superior performance in obtaining discriminative feature representations. However, a wide range of state-of-the-art RE-ID attention models focus only on one-dimensional attention designs, e.g., spatial attention or channel attention, so the resulting attention maps are neither detailed nor discriminative enough to capture the complicated interactions of visual parts. Developing a multiscale attention mechanism for RE-ID, an under-studied approach, is therefore a practicable way to overcome this deficiency. Toward this goal, we propose a Multiscale Omnibearing Attention Network (MOAN) for RE-ID that exploits the complex fused information acquired from a multiscale attention mechanism to produce more representative features. Specifically, MOAN takes full advantage of multi-sized convolution filters to obtain discriminative holistic and local feature maps, and adaptively augments feature information through an Omnibearing Attention (OA) module. In the OA module, spatial attention and channel attention are integrated so that they work in a complementary manner; MOAN thus not only inherits the merits of both kinds of attention mechanism but also extracts comprehensive feature information. Furthermore, to improve the robustness of the model, we formulate a Random Drop (RD) function that facilitates training MOAN and increases the diversity of the trained model. In addition, to achieve end-to-end training, we replace the initially fixed parameters with trainable ones, which experimentally improves model performance. Extensive experiments on four mainstream RE-ID datasets show that our method with re-ranking achieves rank-1 accuracy of 92.29% on CUHK03-NP, 97.45% on Market-1501, 93.81% on DukeMTMC-reID, and 81.53% on MSMT17-V2, outperforming state-of-the-art methods and confirming the effectiveness of our approach.
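The abstract does not spell out the internals of the OA module, but since it describes spatial attention and channel attention acting complementarily on convolutional feature maps, a minimal PyTorch sketch of one plausible realization is given below. All names and design choices here (the CBAM-style channel/spatial pooling, the reduction ratio, the 7x7 spatial convolution) are illustrative assumptions for the reader, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class OmnibearingAttentionSketch(nn.Module):
    """Hypothetical sketch of an attention block that fuses channel and
    spatial attention in a complementary way, in the spirit of the OA
    module described in the abstract. The paper's exact OA design is not
    given here; this follows a common CBAM-style pattern."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: a 2-D map built from pooled channel statistics.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention re-weights feature channels.
        x = x * self.channel_gate(x)
        # Spatial attention re-weights locations using average- and
        # max-pooled maps across channels, complementing the channel gate.
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        x = x * self.spatial_gate(torch.cat([avg_map, max_map], dim=1))
        return x

# Illustrative usage on a batch of RE-ID feature maps (shapes assumed):
feats = torch.randn(8, 256, 24, 8)
attended = OmnibearingAttentionSketch(channels=256)(feats)
```

Applying the channel gate before the spatial gate, as above, is one conventional ordering; the abstract only states that the two mechanisms are integrated complementarily, so other fusion schemes are equally consistent with it.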
