Abstract

The Vector of Locally Aggregated Descriptors (VLAD) method, which aggregates local descriptors into a compact image representation, has achieved great success in image classification and retrieval. However, the original VLAD method uses a hard assignment strategy that assigns each descriptor only to its nearest visual word in the dictionary, which leads to large quantization error. In this paper, an improved VLAD based on adaptive bases and saliency weights is proposed to address this problem. The new method takes the local density distribution into account when assigning local descriptors, adaptively selects several nearest visual words for each descriptor, and uses the coding coefficients obtained from saliency as the weights of the selected visual words. Experimental results on the Corel 10, 15 Scenes, and UIUC Sports Event datasets show that the proposed coding method achieves better classification performance than five existing VLAD-based methods and two commonly used representation methods.
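The abstract only outlines the assignment strategy; the adaptive selection rule and the saliency-based coefficients are defined in the paper body. The sketch below illustrates the general idea of soft-assignment VLAD under stated assumptions: a fixed number of neighbors k stands in for the paper's adaptive selection, and the weights_fn hook is a hypothetical placeholder for the saliency-derived coefficients (neither name comes from the paper).

```python
import numpy as np

def soft_assignment_vlad(descriptors, codebook, k=3, weights_fn=None):
    """Illustrative VLAD with soft assignment to k nearest visual words.

    descriptors : (N, D) local descriptors of one image.
    codebook    : (K, D) visual words (the dictionary).
    k           : number of nearest visual words per descriptor
                  (the paper chooses this adaptively; fixed here for brevity).
    weights_fn  : optional callable giving per-word weights for a descriptor,
                  a stand-in for the saliency-based coding coefficients.
    """
    K, D = codebook.shape
    vlad = np.zeros((K, D))
    for x in descriptors:
        # Distances from the descriptor to all visual words.
        dists = np.linalg.norm(codebook - x, axis=1)
        nn = np.argsort(dists)[:k]            # k nearest visual words
        if weights_fn is not None:
            w = weights_fn(x, codebook[nn])   # assumed saliency-based weights
        else:
            w = np.exp(-dists[nn])            # placeholder: distance-based weights
        w = w / (w.sum() + 1e-12)             # normalize weights to sum to 1
        for j, c in zip(nn, w):
            vlad[j] += c * (x - codebook[j])  # weighted residual accumulation
    vlad = vlad.flatten()
    # Standard VLAD post-processing: power normalization then L2 normalization.
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad
```

Compared with hard-assignment VLAD (k = 1, unit weight), spreading each descriptor's residual over several nearby visual words is what reduces the quantization error the abstract refers to.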
