Abstract

Vector quantization (VQ) has long been an efficient and popular method in lossy image and speech compression (Gray, 1984; Nasrabadi & King, 1988; Gersho & Gray, 1992). In these areas, VQ is a technique that can produce results very close to the theoretical limits. The most widely used and simplest technique for designing vector quantizers is the LBG algorithm of Linde, Buzo, and Gray (1980). It is an iterative descent algorithm that monotonically decreases the distortion function towards a local minimum. It is also referred to as the generalized Lloyd algorithm (GLA), since it is a vector generalization of a clustering algorithm due to Lloyd (1982).

New algorithms for VQ based on associative memories or artificial neural networks (ANNs) have arisen as an alternative to traditional methods, and many VQ algorithms have been proposed within this approach; we mention some of them here. The self-organizing map (SOM) ANN, developed by Teuvo Kohonen in the early 1980s (Kohonen, 1981; Kohonen, 1982), has been used with a great deal of success in creating new schemes for VQ. The SOM is a competitive-learning network. Amerijckx et al. (1998) proposed a lossy compression scheme for digital still images using Kohonen’s neural network algorithm, applying the SOM at both the quantization and coding stages of the image compressor. At the quantization stage, the SOM algorithm creates a correspondence between the input space of stimuli and the output space formed by the codebook elements (codewords, or neurons), with each input assigned to its nearest codeword under the Euclidean distance. Once the network has been trained, these codebook elements approximate the vectors in the input space as closely as possible. At the entropy-coding stage, a differential entropy coder exploits the topology-preserving property of the SOM resulting from the learning process and the observation that consecutive blocks in an image are often similar.
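The LBG/GLA iteration described above alternates two steps: partition the training vectors among the codewords (nearest neighbor under the Euclidean distance, as in the SOM quantization stage), then move each codeword to the centroid of its cell. A minimal sketch of this idea, assuming NumPy and a random initialization from the training set (the function name and parameters are illustrative, not from the cited papers):

```python
import numpy as np

def lbg(vectors, codebook_size, tol=1e-6, max_iter=100, seed=0):
    """Hypothetical sketch of the LBG / generalized Lloyd iteration."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen training vectors.
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    prev_distortion = np.inf
    labels = np.zeros(len(vectors), dtype=int)
    for _ in range(max_iter):
        # Partition step: assign each vector to its nearest codeword
        # under the Euclidean distance.
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        distortion = (dists[np.arange(len(vectors)), labels] ** 2).mean()
        # Centroid step: move each codeword to the mean of its cell
        # (empty cells are left unchanged in this simple sketch).
        for k in range(codebook_size):
            cell = vectors[labels == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
        # The distortion is non-increasing; stop at a local minimum.
        if prev_distortion - distortion < tol:
            break
        prev_distortion = distortion
    return codebook, labels
```

Each iteration can only lower the mean squared distortion, which is why the algorithm converges, but only to a local minimum that depends on the initialization.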
In (Amerijckx et al., 2003), the same authors proposed a lossless image compression scheme using SOMs and the same principles. Yair et al. (1992) provide a convergence analysis of the Kohonen learning algorithm (KLA) with respect to VQ optimality criteria and introduce a stochastic relaxation technique that reaches the global minimum but is computationally expensive. By incorporating the
