Abstract

This chapter provides an overview of the history of growing self-organizing networks. Starting from Kohonen's original work on the self-organizing map, various modifications and new developments are motivated and illustrated. Clustering huge data sets without knowing the number of clusters in advance is a task at which incremental networks should excel. The chapter discusses how the original growing neural gas (GNG) method has been enhanced by maintaining a "strength" parameter for every edge, incremented each time the winner for an input signal is connected to the second winner by that edge. This makes it possible to identify frequently used edges rather than merely recording whether an edge exists, information that could be used for clustering in noisy environments. Since many competitive learning methods can be seen as special cases of the expectation-maximization (EM) algorithm, another interesting research direction might be to use ideas from growing self-organizing networks to develop new incremental variants of EM. Conversely, one could try to incorporate EM into the self-organization process of growing networks. Since EM can be used for density estimation, such methods would have immediate applications in pattern recognition.
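The edge-strength bookkeeping mentioned above can be sketched as follows. This is a minimal illustration, not the chapter's actual algorithm: the function names (`two_nearest`, `update_edge_strength`) and the threshold used to flag "frequently used" edges are assumptions, and the full GNG machinery (unit adaptation, edge aging, node insertion) is omitted.

```python
import math
import random

def two_nearest(units, x):
    """Return the indices of the winner and second winner for input x."""
    order = sorted(range(len(units)), key=lambda i: math.dist(units[i], x))
    return order[0], order[1]

def update_edge_strength(edges, s1, s2):
    """Connect winner s1 and second winner s2 and increment the edge's
    'strength' counter (an assumed extension of standard GNG, which
    stores only an edge age, not a usage count)."""
    key = (min(s1, s2), max(s1, s2))  # undirected edge
    edges[key] = edges.get(key, 0) + 1

# Toy demonstration with three fixed units and random 2-D signals.
random.seed(0)
units = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
edges = {}
for _ in range(100):
    x = (random.random(), random.random())
    s1, s2 = two_nearest(units, x)
    update_edge_strength(edges, s1, s2)

# Frequently used edges (threshold chosen arbitrarily for illustration)
strong = {e: c for e, c in edges.items() if c > 10}
```

In a noisy setting, thresholding on the strength counter lets one keep edges supported by many signals while discarding spurious connections that a plain exists/does-not-exist edge set cannot distinguish.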
