Finite-state vector quantization (FSVQ) is known to give better performance than memoryless vector quantization (VQ). This paper presents a new FSVQ scheme, called finite-state residual vector quantization (FSRVQ), in which each state uses a residual vector quantizer (RVQ) to encode the input vector. This scheme differs from conventional FSVQ in that the state-RVQ codebooks encode residual vectors rather than the original vectors. A neural network predictor estimates the current block from the four previously encoded blocks. The predicted vector is then used both to identify the current state and to generate a residual vector (the difference between the current vector and the predicted vector). This residual vector is encoded using the current state's RVQ codebooks. A major task in designing the proposed FSRVQ is the joint optimization of the next-state codebook and the state-RVQ codebooks. This is achieved by introducing a novel tree-structured competitive neural network in which the first layer implements the next-state function and each branch of the tree implements the corresponding state RVQ. A joint training algorithm is also developed that mutually optimizes the next-state and state-RVQ codebooks for the proposed FSRVQ. Joint optimization of the next-state function and the state-RVQ codebooks eliminates a large number of redundant states present in conventional FSVQ designs; consequently, the memory requirements of the proposed FSRVQ scheme are substantially reduced. Its very low memory requirements and the low search complexity of the state RVQs also make the proposed FSRVQ practical at high bit rates. Simulation results show that the proposed FSRVQ scheme outperforms conventional FSVQ schemes in terms of both memory requirements and the visual quality of the reconstructed image. The proposed FSRVQ scheme also outperforms JPEG (the current standard for still image compression) at low bit rates.
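For concreteness, a minimal sketch of one FSRVQ encoding step is given below. It is not from the paper: the names `predict`, `state_centroids`, `state_codebooks`, and `nearest` are hypothetical stand-ins for the neural-network predictor, the next-state codebook, and the per-state RVQ stage codebooks described above, and the next-state rule (nearest state centroid to the predicted vector) is an assumption about how the prediction identifies the current state.

```python
import numpy as np

def nearest(codebook, vec):
    """Index of the codeword in `codebook` (K x d array) closest to `vec` (Euclidean)."""
    return int(np.argmin(np.linalg.norm(codebook - vec, axis=1)))

def encode_block(x, neighbors, predict, state_centroids, state_codebooks):
    """Hypothetical sketch of one FSRVQ encoding step.

    x               -- current block, flattened to a 1-D vector
    neighbors       -- the four previously encoded blocks fed to the predictor
    predict         -- stand-in for the paper's neural-network predictor
    state_centroids -- next-state codebook, one representative vector per state
    state_codebooks -- per-state list of RVQ stage codebooks
    """
    x_hat = predict(neighbors)               # predicted vector for the current block
    state = nearest(state_centroids, x_hat)  # assumed next-state rule: closest state to the prediction
    r = x - x_hat                            # residual to be quantized
    indices = []
    for stage in state_codebooks[state]:     # multistage residual quantization
        i = nearest(stage, r)
        indices.append(i)
        r = r - stage[i]                     # what remains after this stage
    recon = x - r                            # reconstruction = prediction + quantized residual
    return state, indices, recon
```

Under these assumptions only the per-stage indices need to be transmitted: because the predictor and the next-state rule operate on previously encoded blocks, a decoder holding the same predictor and codebooks can repeat the prediction, recover the state, and add back the quantized residual.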