Abstract

The performance of the Self-Organizing Map (SOM) is strongly influenced by the learning method. The resulting quality of the SOM is also highly dependent on the learning rate and the neighborhood function. The literature contains many studies searching for a proper method to improve the quality of the SOM learning process. They focus especially on convergence (Cottrell et al., 1998; Kohonen, 2001), on measures of global topology preservation (Bauer et al., 1999) and, at an individual level, on sensitivity to parameters such as initialization, the rate of decrease of the neighborhood function, the optimum learning rate, etc. (de Bodt & Cottrell, 2000; Mulier & Cherkassky, 1995; Germen, 2005; Flanagan, 1996; 1994). Although various disciplines use the SOM model to solve a broad spectrum of problems, there is little guidance on how the resulting maps are supposed to look after training. Most articles focus on the learning process of the SOM, and the quality of the SOM needs to be measured during this process. The question is how to measure this quality.

The distortion measure is certainly the most popular criterion for assessing the quality of the classification produced by the SOM (Kohonen, 2001; Rynkiewicz, 2006). It provides an assessment of SOM properties with respect to the data and compensates for the absence of a cost function in the SOM algorithm. Usually the Mean Squared Error (MSE) is used to measure distortion. However, the MSE is just a number without any dimension or scale, and it may be hard to interpret: what value of the distortion is small enough? At what point should the learning process be terminated?

An alternative approach to measuring the quality of the learning process is the goal of our research. In the SOM, each neuron represents a set of input vectors. As the learning process continues, these sets should become more and more stable, i.e. a particular input vector should not move from one set to another in successive iterations of the learning process. The movements that still occur can be used to measure the quality of the learning process, or the quality of the resulting SOM if we decide to stop the learning.

Another idea is to use dimension reduction methods to capture the most significant features of the resulting SOM (Dvorský, 2007). At the beginning of the learning process, the weights in the SOM are initialized with random values; at this point there is no common, important feature in the SOM – the SOM contains only noise. As the learning process continues, the map learns significant features of the data. These features should be dominant, and if some approximation of the SOM is computed, these features must be preserved. At this moment, SVD or HOSVD, see sections 2.1 and 2.2, can be used to compute the SOM approximation. In terms of 3
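The distortion (quantization error) criterion mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation; it assumes the SOM is stored as a flat `(n_units, dim)` weight array, and the function name `som_distortion` is ours:

```python
import numpy as np

def som_distortion(data, weights):
    """Quantization error (MSE) of a SOM: the mean squared distance
    between each input vector and the weight vector of its
    best-matching unit (BMU).

    data    : (n_samples, dim) array of input vectors
    weights : (n_units, dim) array of SOM codebook vectors
    """
    # Pairwise distances between every sample and every unit.
    dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    # Each sample is quantized by its nearest unit (the BMU).
    bmu_dist = dists.min(axis=1)
    return float(np.mean(bmu_dist ** 2))
```

As the abstract notes, the resulting number is dimensionless and scale-dependent, which is exactly why it is hard to decide when it is "small enough".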
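The stability idea, counting input vectors that still move between neuron sets in successive iterations, could be sketched like this. Again a hypothetical sketch under the same flat-weight-array assumption; the names `bmu_assignment` and `movement_ratio` are ours:

```python
import numpy as np

def bmu_assignment(data, weights):
    """Index of the best-matching unit (BMU) for each input vector."""
    dists = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return dists.argmin(axis=1)

def movement_ratio(data, weights_prev, weights_curr):
    """Fraction of input vectors whose BMU changed between two
    successive snapshots of the SOM weights. A value near zero
    indicates the neuron sets have become stable."""
    prev = bmu_assignment(data, weights_prev)
    curr = bmu_assignment(data, weights_curr)
    return float(np.mean(prev != curr))
```

Unlike the MSE, this ratio has a natural scale (0 = no vector moved, 1 = every vector moved), so a stopping threshold is easier to interpret.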
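The SVD-based approximation idea can also be illustrated with a short sketch: a truncated SVD of the weight matrix keeps only the dominant singular directions, so significant features learned by the map survive the approximation while random (noisy) weights have no such dominant structure. This is a generic rank-k SVD approximation, not the HOSVD variant from Section 2.2, and the function name is ours:

```python
import numpy as np

def som_svd_approximation(weights, rank):
    """Rank-k approximation of a SOM weight matrix via truncated SVD.

    weights : (n_units, dim) SOM codebook matrix
    rank    : number of singular components to keep
    """
    U, s, Vt = np.linalg.svd(weights, full_matrices=False)
    # Reconstruct from the `rank` largest singular components only.
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
```

Comparing the original map with its low-rank approximation (e.g. via the distortion of both maps on the same data) then gives another view of how much structure, as opposed to noise, the SOM has learned.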
