Abstract
An alternative model, the codebook-excited neural network, has been proposed for source coding, or vector quantisation. This model has two advantages: memory between source frames can easily be taken into account through recurrent connections, and the number of network connections is independent of the transmission rate. Simulations also show good quantisation performance, and the codebook-excited neural network is applicable with any distortion measure. For a zero-mean, unit-variance, memoryless Gaussian source and a squared-error measure, a 1 bit/sample two-dimensional quantiser built on a codebook-excited feedforward neural network is found to escape local minima and converge to the best of the three local minima known to exist for the vector quantiser designed with the LBG algorithm. Moreover, owing to its conformal-mapping characteristic, the codebook-excited neural network can be applied to designing vector quantisers with any required structural form on their codevectors.
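The LBG baseline referred to in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a conventional LBG (splitting) design of a 4-codevector quantiser for two-dimensional vectors from a zero-mean, unit-variance Gaussian source, i.e. the 1 bit/sample setting described above, under the squared-error distortion measure. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def lbg(data, n_codevectors, n_iters=50, eps=1e-3):
    """Conventional LBG design: split the codebook, then refine with
    nearest-neighbour / centroid iterations (illustrative sketch)."""
    codebook = data.mean(axis=0, keepdims=True)
    while codebook.shape[0] < n_codevectors:
        # split each codevector into a slightly perturbed pair
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(n_iters):
            # partition the training set under squared-error distortion
            d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            labels = d.argmin(axis=1)
            # centroid update; keep the old codevector if a cell is empty
            for k in range(codebook.shape[0]):
                pts = data[labels == k]
                if len(pts):
                    codebook[k] = pts.mean(axis=0)
    return codebook

rng = np.random.default_rng(0)
data = rng.standard_normal((5000, 2))  # memoryless N(0, 1) source, 2-D vectors
cb = lbg(data, 4)                      # 4 codevectors on 2-D vectors = 1 bit/sample
dist = ((data[:, None, :] - cb[None, :, :]) ** 2).sum(-1).min(axis=1).mean()
```

Which of the three known local minima such a run converges to depends on the initial split and training data; the point made in the abstract is that the codebook-excited neural network, unlike this procedure, was observed to reach the best of the three.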