Abstract
The encoding speed of vector quantization (VQ) is a bottleneck in practical applications because encoding requires a large number of k-dimensional (kD) Euclidean distance computations. By first using two well-known statistical features of a kD vector, its sum and its variance, to estimate the Euclidean distance, the IEENNS (improved equal-average equal-variance nearest neighbor search) method rejects most of the codewords that cannot match a given input vector. By splitting a kD vector in half to generate two corresponding (k/2)D subvectors and then applying the IEENNS method again to each subvector, the SIEENNS (subvector-based IEENNS) method was subsequently proposed. The SIEENNS method is, so far, the most search-efficient subvector-based encoding method for VQ, but it still carries substantial memory and computational redundancy. This paper aims to improve the state-of-the-art SIEENNS method by introducing a new 3-level data structure to reduce memory redundancy and by avoiding the variances of the two (k/2)D subvectors to reduce computational redundancy. Experimental results confirm that the proposed method reduces the memory requirement for each kD vector from (k+6) to (k+1) values and, at the same time, improves overall search efficiency by 20-30% compared to the SIEENNS method.
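The rejection idea described above can be sketched in code. The sum (equivalently, the mean) and the standard deviation of a vector give a provable lower bound on its Euclidean distance to any codeword: d²(x, c) ≥ k[(m_x − m_c)² + (s_x − s_c)²], where m and s are the mean and standard deviation. A codeword whose bound already exceeds the best distance found so far can be skipped without computing the full kD distance. The sketch below is an illustration of this bound-based rejection, not the paper's exact algorithm; the function name and codebook layout are assumptions.

```python
import numpy as np

def bound_based_search(x, codebook, means, stds):
    """Nearest-codeword search with mean/std rejection, in the spirit of
    IEENNS as described in the abstract (the paper's exact inequality and
    subvector refinements are not reproduced here)."""
    k = x.size
    mx, sx = x.mean(), x.std()
    best_idx, best_d2 = -1, np.inf
    for i, c in enumerate(codebook):
        # Lower bound: d^2(x, c) >= k * ((mx - mc)^2 + (sx - sc)^2)
        bound = k * ((mx - means[i]) ** 2 + (sx - stds[i]) ** 2)
        if bound >= best_d2:
            continue  # this codeword cannot beat the current best match
        d2 = np.sum((x - c) ** 2)  # full kD distance, only when needed
        if d2 < best_d2:
            best_idx, best_d2 = i, d2
    return best_idx, best_d2
```

The means and standard deviations of the codewords are computed once offline, so each rejection test costs only a few scalar operations instead of a full kD distance computation.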