Abstract

Text-based person search aims to retrieve person images that correspond to a given text description. However, due to the significant pattern differences between the image and text modalities, aligning the two modalities remains a challenging problem. Most existing approaches consider semantic alignment only within a global context or at local parts, and give little consideration to matching image and text in terms of their differences in modal information. Therefore, in this paper, we propose an efficient Modality-Aligned Person Search network (MAPS) to address this problem. First, we suppress image-specific information through image feature style normalization to achieve modality knowledge alignment and reduce the information gap between text and image. Second, we design a multi-granularity modal feature fusion and optimization method to enrich the modal features. To address the problem of useless and redundant information in the multi-granularity fused features, we propose a Multi-granularity Feature Self-optimization Module (MFSM) that adaptively adjusts the contributions of different granularities in the fused features of the two modalities. Finally, to address the inconsistency of information between the training and inference stages, we propose a Cross-instance Feature Alignment (CFA) scheme that helps the network enhance its category-level generalization ability and improve retrieval performance. Extensive experiments demonstrate that our MAPS achieves state-of-the-art performance on all text-based person search datasets and significantly outperforms existing methods.
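The abstract mentions suppressing image-specific information via image feature style normalization. The paper's exact formulation is not given here; the following is only a minimal sketch of one plausible reading, in which per-image channel statistics (a common proxy for image "style") are whitened, instance-normalization style, before cross-modal matching. The class name `FeatureStyleNormalization` and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class FeatureStyleNormalization(nn.Module):
    """Illustrative sketch (assumption): remove per-image, per-channel
    statistics from visual feature maps so image features are statistically
    closer to text features before cross-modal alignment."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (batch, channels, height, width) image feature map
        mu = feat.mean(dim=(2, 3), keepdim=True)            # per-image channel mean
        sigma = feat.var(dim=(2, 3), keepdim=True).sqrt()   # per-image channel std
        return (feat - mu) / (sigma + self.eps)             # style-normalized features


# Example usage: normalize backbone feature maps before fusing with text features.
feats = torch.randn(8, 512, 24, 8)   # hypothetical CNN features for 8 person images
aligned = FeatureStyleNormalization()(feats)
```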
