Abstract

Hashing-based methods for large-scale fine-grained image retrieval face two main problems. First, low-dimensional feature embeddings speed up retrieval but reduce accuracy because much information is lost. Second, the subtle differences among fine-grained images cause query hash codes of the same category to map into different clusters in the database hash latent space. To handle these issues, we propose a feature consistency driven attention erasing network (FCAENet) for fine-grained image retrieval. For the first issue, we propose an adaptive augmentation module in FCAENet, the selective region erasing module (SREM). SREM makes the network more robust to the subtle differences of fine-grained tasks by adaptively covering some regions of the raw images, so the feature extractor and hash layer can learn more representative hash codes for fine-grained images. For the second issue, we fully exploit pair-wise similarity information and add an enhancing space relation loss (ESRL) to FCAENet, which stabilizes the otherwise fragile relation between query hash codes and database hash codes. We conduct extensive experiments on five fine-grained benchmark datasets (CUB2011, Aircraft, NABirds, VegFru, Food101) with 12-bit, 24-bit, 32-bit, and 48-bit hash codes. The results show that FCAENet achieves state-of-the-art (SOTA) fine-grained image retrieval performance among hashing-based methods.

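The abstract describes SREM and ESRL only at a conceptual level, without formulas. The sketch below is a rough illustration of those two general ideas, attention-guided region erasing as augmentation and a pairwise similarity loss on relaxed hash codes; the function names, erasing strategy, and loss formulation are assumptions for illustration, not the paper's exact modules.

```python
# Illustrative sketch only: the names and formulations below are assumptions
# showing the general ideas (attention-guided erasing + pairwise hash loss),
# not the exact SREM/ESRL definitions from the paper.
import torch
import torch.nn.functional as F


def erase_salient_region(images, attention, erase_ratio=0.3):
    """Cover the most-attended square region of each image (stand-in for SREM).

    images:    (B, 3, H, W) raw images
    attention: (B, h, w) spatial attention map from the backbone
    """
    B, _, H, W = images.shape
    att = F.interpolate(attention.unsqueeze(1), size=(H, W),
                        mode="bilinear", align_corners=False).squeeze(1)
    size = int(erase_ratio * min(H, W))
    erased = images.clone()
    for b in range(B):
        # centre the erased patch on the attention peak
        idx = torch.argmax(att[b])
        cy, cx = int(idx // W), int(idx % W)
        y0, x0 = max(0, cy - size // 2), max(0, cx - size // 2)
        erased[b, :, y0:y0 + size, x0:x0 + size] = 0.0
    return erased


def pairwise_relation_loss(query_codes, db_codes, similarity):
    """Generic pairwise similarity loss on relaxed codes (stand-in for ESRL).

    query_codes, db_codes: (B, K) tanh-relaxed hash codes in [-1, 1]
    similarity:            (B, B) with 1 for same-category pairs, 0 otherwise
    """
    K = query_codes.size(1)
    inner = query_codes @ db_codes.t() / K   # normalized inner product in [-1, 1]
    target = 2.0 * similarity - 1.0          # map {0, 1} labels to {-1, +1}
    return F.mse_loss(inner, target)
```

In this sketch, the erased images would be fed through the same feature extractor and hash layer as the raw images, and the pairwise loss would be combined with a standard hashing objective; the actual training pipeline is specified in the full paper.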