Abstract

Self-organizing neural networks are characterized by topology preservation, dynamic adaptation, clustering, and dimensionality reduction, properties that have led to their wide application in data mining, knowledge extraction, and image processing. However, existing self-organizing neural networks cannot automatically generate an output space with an appropriate number of neurons for the given input data. To address this problem, this paper proposes a growing neural gas (GNG) algorithm with an adaptive output network scale, called the scale-adaptive GNG (SA-GNG) algorithm. The learning process of SA-GNG is divided into two stages: growth and convergence. In the growth stage, distortion error stability is introduced to objectively judge how closely the output network approximates the input space, so that SA-GNG grows neurons on demand until the distortion error no longer improves significantly. In the convergence stage, no new neurons are created; instead, the similarity between the output network and the input data is improved through continued learning. SA-GNG thus autonomously generates an appropriate number of neurons according to the size of the input data, without the total number of neurons having to be determined in advance, which greatly improves its adaptability. The algorithm is therefore especially suitable for application scenarios where the amount of input data is unknown. The validity and feasibility of the proposed algorithm are verified by experiments.
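
The two-stage process described above can be illustrated with a minimal sketch. The code below is not the paper's implementation: the GNG update rules are heavily simplified, and the function name `sa_gng_sketch`, the learning rates `eps_b`/`eps_n`, the threshold `stability_tol`, and the `max_neurons` safety cap are illustrative assumptions. It only shows the control flow of growing neurons until the distortion error stops improving significantly, then refining the network without further insertion.

```python
import numpy as np

def distortion_error(X, W):
    """Mean squared distance from each input sample to its best-matching neuron."""
    d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)  # (n_samples, n_neurons)
    return float(np.mean(np.min(d, axis=1) ** 2))

def sa_gng_sketch(X, eps_b=0.05, eps_n=0.006, stability_tol=1e-3,
                  growth_interval=100, convergence_epochs=5,
                  max_neurons=200, rng=None):
    """Two-stage SA-GNG-style training sketch: grow neurons until the
    distortion error stabilizes, then refine without insertion."""
    rng = np.random.default_rng(rng)
    W = X[rng.choice(len(X), size=2, replace=False)].copy()  # start with two neurons
    err = np.zeros(len(W))            # accumulated local error per neuron
    prev_E = np.inf

    # --- Growth stage: insert neurons on demand until the error is stable ---
    while True:
        for _ in range(growth_interval):
            x = X[rng.integers(len(X))]
            d = np.linalg.norm(W - x, axis=1)
            s1, s2 = np.argsort(d)[:2]            # winner and runner-up
            err[s1] += d[s1] ** 2
            W[s1] += eps_b * (x - W[s1])          # move winner toward the sample
            W[s2] += eps_n * (x - W[s2])          # move runner-up slightly
        E = distortion_error(X, W)
        if abs(prev_E - E) / max(E, 1e-12) < stability_tol or len(W) >= max_neurons:
            break                                  # no significant improvement: stop growing
        prev_E = E
        q = np.argmax(err)                         # neuron with the largest accumulated error
        W = np.vstack([W, W[q] + 0.01 * rng.standard_normal(W.shape[1])])
        err = np.append(err * 0.5, 0.0)

    # --- Convergence stage: no insertion, only continued refinement ---
    for _ in range(convergence_epochs):
        for x in X[rng.permutation(len(X))]:
            d = np.linalg.norm(W - x, axis=1)
            s1, s2 = np.argsort(d)[:2]
            W[s1] += eps_b * (x - W[s1])
            W[s2] += eps_n * (x - W[s2])
    return W
```

The key design point is that the number of neurons is never fixed in advance: growth stops only when further insertions would no longer reduce the distortion error significantly.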

Highlights

  • The topological structure of an image, which is invariant to parameters such as image size and orientation, is considered an important feature for image analysis and recognition [1]

  • This paper proposes the scale-adaptive growing neural gas (SA-GNG) algorithm and introduces a judgment index that evaluates node growth in the output network

  • This index allows SA-GNG to adapt to the input space autonomously, without knowing the distribution of the input space or presetting the scale of the output network in advance


Summary

INTRODUCTION

The topological structure of an image, which is invariant to parameters such as image size and orientation, is considered an important feature for image analysis and recognition [1]. Distortion error stability is used to objectively judge how closely the output space approximates the input space. With this index, the SA-GNG algorithm can autonomously adapt to the scale of the input space and generate an appropriate number of neurons, with no need to preset the scale of the output network. In the convergence stage, SA-GNG does not create new neurons; instead, it improves the similarity between the output network and the input data through continued learning. Across these two stages, SA-GNG autonomously generates an appropriate number of neurons according to the volume of the input data, without the total number of neurons having to be preset. SA-GNG can effectively and autonomously extract an output network with an appropriate number of neurons from different images while maintaining topological features, making it suitable for application scenarios where the scale and structure of the input data are unknown. A sketch of the stability index used as the growth criterion is given after this paragraph.
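
The following is a minimal sketch of how a distortion-error-stability criterion could be tracked. The class name `DistortionErrorStability`, the sliding window, and the relative-spread test are assumptions for illustration; the paper's exact stability index and thresholds may differ.

```python
from collections import deque
import numpy as np

class DistortionErrorStability:
    """Track recent distortion errors and report whether they have stabilized,
    i.e. whether further neuron growth is unlikely to bring a significant
    improvement (illustrative sketch only)."""

    def __init__(self, window=5, tol=1e-3):
        self.history = deque(maxlen=window)
        self.tol = tol

    def update(self, X, W):
        # Distortion error: mean squared distance of each input sample
        # to its best-matching neuron (codebook vector).
        d = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
        E = float(np.mean(np.min(d, axis=1) ** 2))
        self.history.append(E)
        return E

    def is_stable(self):
        # Stable when the relative spread of the recent errors falls below tol.
        if len(self.history) < self.history.maxlen:
            return False
        errs = np.array(self.history)
        return (errs.max() - errs.min()) / max(errs.mean(), 1e-12) < self.tol
```

In a growth loop, `update` would be called after each training interval, and `is_stable()` would trigger the switch from the growth stage to the convergence stage.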

PROBLEM PROPOSED
THE CONCEPT OF DISTORTION ERROR
THE CALCULATION OF DISTORTION ERROR STABILITY
TOPOLOGY PRESERVATION MEASUREMENT BASED ON MAXIMUM ENTROPY PRINCIPLE
EXPERIMENTS AND DISCUSSION
CONCLUSION