Abstract
The quantization property of layered neural networks is studied in this paper. We first review the layered neural network-based coders, or quantizers, developed in recent years and show that their poor performance is due to the independent training of each component in the coder. An alternative model, called the codebook-excited neural network, is then proposed, in which an encoded vector is approximated by the output of a network driven by one vector selected from an excitation codebook. The network and the excitation codebook are jointly trained with the error back-propagation algorithm. Simulations with a Gauss-Markov source demonstrate that the quantization performance of the codebook-excited feedforward neural network is no worse than that of the connectionist vector quantizer formed by a set of single-layer neural units satisfying the optimal quantization conditions, and that the performance of the codebook-excited recurrent neural network is very close to the asymptotic performance bound of block quantizers. The codebook-excited neural network is applicable to any distortion measure. For a zero-mean, unit-variance, memoryless Gaussian source and a squared-error measure, a 1 bit/sample two-dimensional quantizer built with a codebook-excited feedforward neural network is found to always escape from local minima and converge to the best of the three local minima known to exist for the vector quantizer designed with the LBG algorithm. Moreover, owing to its conformal mapping characteristic, the codebook-excited neural network can be used to design vector quantizers with any required structural form imposed on their codevectors.
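To make the joint training idea concrete, the following is a minimal sketch of a codebook-excited feedforward quantizer trained under a squared-error measure. The class name, network architecture, and hyper-parameters are illustrative assumptions and do not reproduce the configuration used in the paper; only the overall scheme (select the excitation whose network output best matches the input, then back-propagate the distortion through both the network and the codebook) follows the abstract.

```python
# Hypothetical sketch of a codebook-excited feedforward quantizer (not the
# paper's actual architecture or training configuration).
import torch
import torch.nn as nn

class CodebookExcitedNet(nn.Module):
    def __init__(self, n_codes=2, code_dim=4, out_dim=2, hidden=8):
        super().__init__()
        # Learnable excitation codebook: one row per codeword.
        self.codebook = nn.Parameter(torch.randn(n_codes, code_dim))
        # Feedforward net mapping an excitation vector to a reproduction vector.
        self.net = nn.Sequential(nn.Linear(code_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, out_dim))

    def forward(self):
        # Reproduction vectors obtained by driving the net with every excitation.
        return self.net(self.codebook)             # (n_codes, out_dim)

def train_step(model, x, opt):
    """Encode a batch x of shape (batch, out_dim) and jointly update net + codebook."""
    outputs = model()                               # all candidate reproductions
    d = torch.cdist(x, outputs)                     # squared-error-style distances
    idx = d.argmin(dim=1)                           # nearest-reproduction encoding
    loss = ((x - outputs[idx]) ** 2).mean()         # distortion of selected outputs
    opt.zero_grad()
    loss.backward()                                 # back-propagate through the net
    opt.step()                                      # and the excitation codebook
    return loss.item()

model = CodebookExcitedNet()
opt = torch.optim.SGD(model.parameters(), lr=0.05)
x = torch.randn(256, 2)                             # zero-mean, unit-variance Gaussian source
for _ in range(200):
    train_step(model, x, opt)
```

In this sketch the codebook entries receive gradients only when selected, while the network parameters are updated on every step, so the two components are optimized jointly rather than trained independently as in the earlier coders criticized above.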