Abstract

Quantization has recently become an effective technique for large-scale image retrieval, encoding feature vectors into compact codes. However, improving the discriminative capability of codewords while minimizing the quantization error remains a great challenge. This letter proposes Dual Distance Optimized Deep Quantization (D²ODQ) to address this issue by minimizing the Euclidean distance between samples and codewords and maximizing the minimum cosine distance between codewords. To generate an evenly distributed codebook, we derive the general solution for the upper bound of the minimum cosine distance between codewords. Moreover, a scalar-constrained semantics-preserving loss is introduced to avoid trivial quantization boundaries and to ensure that each codeword quantizes features of only one category. Compared with state-of-the-art methods, our method achieves better performance on three benchmark datasets.
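The dual-distance idea described above can be sketched as a simple objective combining the two terms: the Euclidean quantization error between samples and their assigned codewords, and a penalty encouraging a large minimum cosine distance between codewords. This is a minimal illustrative sketch, not the paper's actual loss; the function name, the trade-off weight `lam`, and the exact combination of terms are assumptions.

```python
import numpy as np

def dual_distance_loss(features, codewords, assignments, lam=1.0):
    """Hypothetical dual-distance objective: Euclidean quantization
    error minus a reward for the minimum cosine distance between
    codewords (larger separation between codewords lowers the loss)."""
    # Term 1: squared Euclidean distance between each sample
    # and the codeword it is assigned to.
    quant_err = np.sum((features - codewords[assignments]) ** 2)

    # Term 2: minimum cosine distance between distinct codewords.
    # Cosine distance = 1 - cosine similarity, so the minimum distance
    # corresponds to the maximum off-diagonal similarity.
    normed = codewords / np.linalg.norm(codewords, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    min_cos_dist = 1.0 - sim.max()

    # Minimizing this loss lowers quantization error while
    # spreading codewords apart on the unit sphere.
    return quant_err - lam * min_cos_dist

# Toy example: two orthogonal codewords that exactly match the samples.
features = np.array([[1.0, 0.0], [0.0, 1.0]])
codewords = np.array([[1.0, 0.0], [0.0, 1.0]])
assignments = np.array([0, 1])
loss = dual_distance_loss(features, codewords, assignments)
```

In the toy example the quantization error is zero and the codewords are orthogonal (cosine distance 1), so the loss reduces to the negated separation reward.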

