Abstract

Vector quantization (VQ) is a well-known method for signal compression. One of the main problems that remains unsatisfactorily solved in a VQ compression system is its encoding speed, which seriously constrains the practical applications of the VQ method. The reason is that in its encoding process VQ must perform many expensive k-dimensional (k-D) Euclidean distance computations in order to determine the best-matched codeword in the codebook for the input vector by finding the minimum Euclidean distance. The most straightforward approach in a VQ framework is to deal with a k-D vector as a whole. By first using two popular statistical features of a k-D vector, its sum and its variance, to estimate the real Euclidean distance, the IEENNS method has been proposed to reject most of the unlikely candidate codewords for a given input vector. Because both the sum and the variance are only approximate descriptions of a vector, and they are more precise when representing a shorter vector, it is better to construct partial sums and partial variances by treating a k-D vector as two lower-dimensional subvectors rather than using the sum and the variance of the whole vector. Then, by equally dividing a k-D vector in half to generate its two corresponding (k/2)-D subvectors and applying the IEENNS method to each subvector, the SIEENNS method has been proposed recently. The SIEENNS method is so far the most search-efficient subvector-based encoding method for VQ, but it still suffers from considerable memory and computational redundancy. This paper aims at improving the state-of-the-art SIEENNS method by (1) introducing a new 3-level data structure to reduce the memory redundancy; (2) avoiding the use of the two partial variances of the two (k/2)-D subvectors to reduce the computational redundancy; and (3) combining the two partial sums of the two (k/2)-D subvectors to strengthen the codeword rejection test. Experimental results confirm that the proposed method reduces the total memory requirement for each k-D vector from (k + 6) to (k + 1) and meanwhile remarkably improves the overall search efficiency by 72.3–81.1% compared with the SIEENNS method.
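To make the rejection idea concrete, below is a minimal Python sketch of an IEENNS-style sum-and-variance rejection test inside a nearest-codeword search. It is an illustration under our reading of the lower bound d²(x, c) ≥ (Sₓ − S_c)²/k + (Vₓ − V_c)², where S is the component sum and V is the norm of the mean-removed residual; it is not the paper's reference implementation, and the function name encode_vq and its parameters are hypothetical.

```python
import numpy as np

def encode_vq(x, codebook):
    """Nearest-codeword search with a sum/variance rejection test (sketch).

    For each codeword c, the squared Euclidean distance is bounded below by
        d2(x, c) >= (S_x - S_c)**2 / k + (V_x - V_c)**2,
    where S is the component sum and V = ||v - mean(v)|| is the norm of the
    mean-removed residual.  Codewords whose bound already exceeds the best
    distance found so far are rejected without a full k-D distance computation.
    """
    k = x.shape[0]
    # In practice these codebook statistics would be precomputed offline.
    sums = codebook.sum(axis=1)
    vnorms = np.linalg.norm(codebook - codebook.mean(axis=1, keepdims=True), axis=1)

    sx = x.sum()
    vx = np.linalg.norm(x - x.mean())

    best_i, best_d2 = 0, np.sum((x - codebook[0]) ** 2)
    for i in range(1, codebook.shape[0]):
        lower = (sx - sums[i]) ** 2 / k + (vx - vnorms[i]) ** 2
        if lower >= best_d2:          # rejection test: skip the full distance
            continue
        d2 = np.sum((x - codebook[i]) ** 2)
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i
```

Since the sums and residual norms of all codewords can be computed once offline, each rejection costs only a few scalar operations instead of a full k-D distance, which is the source of the speedup in this family of methods; the subvector variants apply the same test to each (k/2)-D half.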
