Abstract

Deep hashing aims to produce discriminative binary hash codes for fast image retrieval through a deep baseline network and an additional trainable hash function. In a supervised deep hashing network, the baseline network is generally initialized with a classification-based pretrained model, and the overall hashing network is trained in a supervised fashion. However, since classification and retrieval are two different tasks, it is necessary to reconsider the initial model for the baseline network. In this paper, we propose, for the first time, to use a self-supervised pretrained model as the baseline. We investigate the impact of the pretrained model type by comparing deep hashing networks whose baseline is initialized with 1) random weights, 2) conventional supervised pretrained weights, and 3) the proposed self-supervised pretrained weights. As a result, we confirm that the performance of deep hashing differs depending on the initial baseline setting, and the proposed self-supervised baseline model shows comparable or better performance than the supervised one. Our code is released at https://github.com/HaeyoonYang/SSPH.
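To make the pipeline described in the abstract concrete, here is a minimal NumPy sketch of deep hashing retrieval. It is an illustration under stated assumptions, not the paper's implementation: `features` stands in for embeddings produced by a pretrained baseline network (supervised or self-supervised), and the hash function is reduced to an untrained random projection followed by a sign operation, where a real system would train both jointly.

```python
import numpy as np

# Assumption: `features` plays the role of baseline-network embeddings;
# a real deep hashing system would extract them with a pretrained CNN
# or transformer backbone rather than sample them randomly.
rng = np.random.default_rng(0)

n_images, feat_dim, code_bits = 6, 128, 16
features = rng.normal(size=(n_images, feat_dim))     # stand-in embeddings
hash_proj = rng.normal(size=(feat_dim, code_bits))   # hash-layer weights (untrained here)

# Binary hash codes: sign of the projected features, mapped to {0, 1}.
codes = (features @ hash_proj > 0).astype(np.uint8)

def hamming(a, b):
    """Hamming distance between two binary codes."""
    return int(np.count_nonzero(a != b))

# Retrieval: rank database images by Hamming distance to the query code.
query = codes[0]
ranking = sorted(range(n_images), key=lambda i: hamming(query, codes[i]))
```

The sign-and-Hamming step is what makes retrieval fast: comparing compact binary codes replaces floating-point similarity search, which is why the quality of the baseline embeddings (and hence of the pretrained initialization) matters so much.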
