Abstract

In recent years, binary hashing methods have been widely used in large-scale multimedia retrieval because of their low computational complexity and memory cost. Generally, better retrieval accuracy can be achieved with a longer hash code, which, however, may suffer from redundancy. In this paper, we propose a novel hash bit selection method, called Hash Bit Selection with Reinforcement Learning (HBS-RL), which adaptively selects the most informative bits from the database binary codes. In our approach, the hash bit selection problem is first modeled as a Markov Decision Process (MDP), which is solved with reinforcement learning. HBS-RL learns a policy for bit selection that effectively identifies the most informative bits by directly maximizing mean Average Precision (mAP) during training. Specifically, given a generated bit pool, HBS-RL can sequentially select bits for different code lengths with a very lightweight fully-connected policy network. The proposed method is evaluated on the MNIST, CIFAR-10, ImageNet, and NUS-WIDE datasets, and the results show that it significantly improves the retrieval performance of existing unsupervised and deep supervised hashing methods. It also outperforms state-of-the-art bit selection methods. To facilitate reproduction of our results, we release our source code at https://github.com/xyez/HBS-RL.
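The sequential selection the abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the pool size, code length, network shape, and greedy action selection are all assumptions. The state is a binary mask of already-selected bits, and a small fully-connected policy network scores the remaining candidates at each step (an MDP transition marks the chosen bit as taken).

```python
import numpy as np

rng = np.random.default_rng(0)

B = 64        # size of the generated bit pool (assumed)
K = 16        # target code length (assumed)
HIDDEN = 32   # hidden width of the lightweight policy network (assumed)

# Two-layer fully-connected policy: state (selection mask) -> one score per bit.
W1 = rng.standard_normal((B, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, B)) * 0.1

def policy_logits(state):
    """Score every candidate bit given the current selection mask."""
    h = np.maximum(state @ W1, 0.0)  # ReLU hidden layer
    return h @ W2

def select_bits(k):
    """Sequentially pick k distinct bits, greedily by policy score."""
    state = np.zeros(B)
    chosen = []
    for _ in range(k):
        logits = policy_logits(state)
        logits[state.astype(bool)] = -np.inf  # mask out already-selected bits
        a = int(np.argmax(logits))
        chosen.append(a)
        state[a] = 1.0  # MDP transition: mark bit a as selected
    return chosen

bits = select_bits(K)
assert len(set(bits)) == K  # k distinct bit indices from the pool
```

During training, the policy weights would be updated with a reinforcement-learning objective (e.g., policy gradients) whose reward is the retrieval mAP of the code assembled from the selected bits; only greedy inference is sketched here.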
