Abstract

This paper presents an optimization of self-organizing feature maps achieved by adjusting tunable parameters and by applying linear-algebra techniques within the iterative training process. During the convergence phase, a gradient rule is applied to the singular values of the weight matrix to optimize the algorithm while maintaining sufficient statistical accuracy. Tunable parameters such as the learning rate and neighborhood radius are adjusted to support the learning algorithm. The algorithm presented herein is tested on the self-organization of the standard Iris dataset from the literature.
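For context, a minimal sketch of the baseline self-organizing map training loop that the abstract builds on is shown below, with the two tunable parameters it names (learning rate and neighborhood radius) decayed over training. This is an illustrative baseline only: the grid size, decay schedules, and the synthetic stand-in data are assumptions, and the paper's SVD-based gradient rule on the weight matrix is not reproduced here.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, radius0=2.0, seed=0):
    """Train a basic 2-D self-organizing map (baseline update rule only)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # Grid coordinates of each node, used for neighborhood distances.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Linearly decay the learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * (1.0 - frac)
            radius = radius0 * (1.0 - frac) + 1e-3
            # Best-matching unit: node whose weight vector is closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), (h, w))
            # Gaussian neighborhood influence around the BMU on the grid.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_d2 / (2 * radius ** 2))
            # Pull every node's weights toward x, scaled by influence.
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights

# Synthetic 4-dimensional data as a stand-in for the Iris features.
rng = np.random.default_rng(1)
data = rng.random((50, 4))
som_weights = train_som(data, epochs=5)
```

In practice the real Iris measurements (four features per sample) would be normalized and passed in as `data`.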
