Abstract

Although deep compositional features achieve impressive performance in many application scenarios, their high computational cost and memory usage make them difficult to apply directly to big-data settings. This paper presents a comparative evaluation of techniques for simplifying deep compositional features, exploring existing vector quantization and binarization methods. These techniques differ in how well they preserve similarity and how much memory they save when projecting the original deep compositional features into more compact visual words or binary codes. We propose a dedicated image search framework to evaluate all the techniques in terms of computational cost, memory usage, and discrimination preservation. Extensive experiments demonstrate that the computational cost and memory usage of deep compositional features can be greatly reduced while sufficient discriminative power is preserved. In addition, we derive useful conclusions to guide the design of better simplification schemes.
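To make the two families of techniques concrete, the sketch below illustrates, in generic form, how a batch of real-valued deep features can be mapped to binary codes (threshold binarization) and to visual words (nearest-centroid vector quantization). It is a minimal illustration under assumed settings (128-dimensional features, a 64-word codebook, per-dimension median thresholds, and the function names binarize/quantize are all placeholders), not the specific schemes evaluated in the paper.

```python
import numpy as np

def binarize(features, thresholds=None):
    """Threshold binarization: each feature dimension becomes one bit
    (1 if above the per-dimension median, else 0). Packing the bits
    shrinks storage by roughly 32x versus float32."""
    if thresholds is None:
        thresholds = np.median(features, axis=0)
    bits = (features > thresholds).astype(np.uint8)
    return np.packbits(bits, axis=1), thresholds

def quantize(features, codebook):
    """Vector quantization: replace each feature vector by the index of
    its nearest codebook centroid, i.e. its 'visual word'."""
    # squared Euclidean distance to every centroid, then take the argmin
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # stand-in for deep compositional features: 1000 vectors of dim 128
    feats = rng.normal(size=(1000, 128)).astype(np.float32)
    codes, th = binarize(feats)                                   # compact binary codes
    codebook = feats[rng.choice(len(feats), 64, replace=False)]   # toy 64-word codebook
    words = quantize(feats, codebook)                             # visual-word indices
    print(codes.shape, words.shape)  # (1000, 16) (1000,)
```

Either representation allows Hamming-distance or inverted-index lookups in place of full floating-point comparisons, which is the source of the computational and memory savings the evaluation quantifies.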
