Abstract

Memoryless vector quantization (VQ) is the process of representing a continuous-valued vector by a vector function of a discrete-valued index. A novel system that performs VQ stochastically is introduced. For a given vector to be quantized, the encoder produces a probability distribution over the N indices and then randomly chooses one of the N indices according to that distribution. The structure used to compute the distribution is a feedforward neural network classifier; the decoder is a simple codebook lookup, exactly as in plain VQ. Because the index is chosen at random rather than by a deterministic nearest-neighbor rule, this memoryless stochastic scheme is necessarily suboptimal. Taking the error to be minimized as the expected value, over the set of random choices, of the error between the original vector and the decoder output, the error gradient with respect to the encoder/classifier's outputs and the decoder's codebook vectors can be found. Gradient descent (backpropagation) may then be used to train all the system's parameters to minimize this error. When the data to be quantized form a correlated sequence of vectors, each vector could be quantized separately by a memoryless VQ system; in theory, however, performance can be improved by using state information about the past. The usual state-feedback extension of plain VQ is finite-state vector quantization (FSVQ), but no optimal training procedure for FSVQ is known. The stochastic VQ scheme can also be extended to use feedback state information, and unlike FSVQ, its optimal training scheme (given the structure of the system) is known: by gradient descent (backpropagation-through-time), the system can be trained to optimize its parameters. Experimental results show that this stochastic system with state feedback can perform up to 0.45 dB better than FSVQ on a Gauss-Markov source.
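The memoryless scheme described above admits a compact sketch. This is not the paper's implementation: the classifier here is a toy linear-softmax layer standing in for the feedforward network, and all sizes and names are hypothetical. It does show the key point, namely that the expected distortion over the random index choice has an exact, closed-form gradient with respect to both the classifier's logits and the codebook vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: D-dimensional input vectors, N codebook entries.
D, N = 4, 8
codebook = rng.normal(size=(N, D))      # decoder's codebook vectors
W = 0.1 * rng.normal(size=(N, D))       # toy linear "classifier" (stand-in for the network)

def encode(x):
    """Produce a distribution over the N indices, then sample one index."""
    logits = W @ x
    p = np.exp(logits - logits.max())   # numerically stable softmax
    p /= p.sum()
    idx = rng.choice(N, p=p)
    return p, idx

def expected_distortion(x, p):
    """Expectation over the random index choice of ||x - codebook[i]||^2."""
    return float(np.sum(p * np.sum((codebook - x) ** 2, axis=1)))

def gradients(x, p):
    """Exact gradients of the expected distortion E = sum_i p_i d_i,
    with d_i = ||x - c_i||^2 and p = softmax(logits):
      dE/dlogit_j   = p_j (d_j - E)
      dE/dc_i       = 2 p_i (c_i - x)
    Both can then be fed to ordinary gradient descent / backpropagation."""
    d = np.sum((codebook - x) ** 2, axis=1)
    E = np.sum(p * d)
    grad_logits = p * (d - E)
    grad_codebook = 2.0 * p[:, None] * (codebook - x)
    return grad_logits, grad_codebook
```

Since the objective is the expectation itself rather than any single sampled outcome, no stochastic-gradient estimator over the index choice is needed; the sum over all N indices is computed directly.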
