Abstract

A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors—even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application to less restricted neural controllers, as typically used in evolutionary robotics, has not yet been attempted. Here we show for the first time that the self-optimization process can be implemented in a continuous-time recurrent neural network with asymmetrical connections. We discuss several open challenges that must still be addressed before this technique could be applied in actual robotic scenarios.
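The reset–relax–learn loop described above can be sketched in a classical discrete Hopfield network. This is a minimal illustration, not the exact setup of any cited study: the network size, learning rate, number of resets, and update schedule are all assumptions chosen for brevity. The key structural points are that the network repeatedly converges on an attractor from a random state, a Hebbian update reinforces each visited attractor, and solution quality is always measured against the original, unmodified weights.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50          # network size (illustrative)
DELTA = 0.001   # Hebbian learning rate (illustrative)
RESETS = 300    # number of random restarts (illustrative)

# Fixed symmetric weights define the (typically frustrated) constraint problem.
W0 = rng.uniform(-1.0, 1.0, (N, N))
W0 = (W0 + W0.T) / 2.0
np.fill_diagonal(W0, 0.0)
WL = np.zeros_like(W0)  # accumulated Hebbian ("learned") component

def relax(W, s, sweeps=20):
    """Asynchronous sign updates until the state settles on an attractor."""
    for _ in range(sweeps * N):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def energy(s):
    """Quality of a configuration, measured against the ORIGINAL weights."""
    return -0.5 * s @ W0 @ s

for _ in range(RESETS):
    s = rng.choice([-1, 1], size=N)
    s = relax(W0 + WL, s)          # converge on an attractor of the combined weights
    WL += DELTA * np.outer(s, s)   # reinforce (generalize over) the visited attractor
    np.fill_diagonal(WL, 0.0)
```

Following the description in the abstract, the expectation is that the learned component WL gradually reshapes the state space so that relaxation generalizes over the visited (typically suboptimal) attractors and increasingly finds lower-energy configurations of W0, including ones never visited before.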

Highlights

  • Unsupervised learning techniques have many applications, especially to complex problems that we would like to be solved automatically, but without already knowing what the correct responses are to begin with

  • In section Hopfield Neural Networks we briefly summarize the two main applications of Hopfield neural networks, which form the basis of self-optimization

  • In section A Review of Self-Optimization in Neural Networks we review existing work on self-optimization in neural networks, which largely remains within the classical formalism of the Hopfield neural network


Introduction

Unsupervised learning techniques have many applications, especially to complex problems that we would like to be solved automatically, but without already knowing what the correct responses are to begin with. One popular approach is self-modeling: for example, a multi-legged robot that adapts its controller to its physical body by evaluating its sensory feedback against an internal simulation of its possible body morphology and of how that body would interact with its environment (Bongard et al., 2006). Another approach, which avoids the use of an explicit internal model, is homeostatic adaptation: for example, a multi-legged robot with a homeostatic neural controller will cycle through structural changes to its neural network until a motion pattern is found that keeps the neural activation states within the homeostatic range (Iizuka et al., 2015). This approach has the disadvantage that it is defined only negatively: something in the robot has to break down before the homeostatic mechanism springs into action and starts changing the connection weights of neurons until they recover stability, for example by applying Hebbian learning (Di Paolo, 2000).
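The "negatively defined" character of homeostatic adaptation can be caricatured in a few lines: plasticity stays dormant while activations remain in range and acts only on neurons that have lost homeostasis. Everything below (network size, activation bounds, the anti-Hebbian form of the update) is an illustrative assumption, not the specific rule used by Di Paolo (2000) or Iizuka et al. (2015).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8
W = rng.uniform(-2.0, 2.0, (N, N))   # recurrent weights (illustrative)
y = rng.uniform(-1.0, 1.0, N)        # neural activations
LOW, HIGH = -0.9, 0.9                # homeostatic range (illustrative)
ETA = 0.05                           # plasticity rate (illustrative)

for t in range(2000):
    y = np.tanh(W @ y)                       # discrete-time network update
    out_of_range = (y < LOW) | (y > HIGH)
    if out_of_range.any():
        # Plasticity "springs into action" only for neurons outside the
        # homeostatic range, weakening the correlations that drove them
        # out (an anti-Hebbian change, chosen here for illustration).
        W[out_of_range] -= ETA * np.outer(y[out_of_range], y)
```

The point of the sketch is the trigger structure, not the particular update: nothing in the mechanism says what a good weight configuration looks like, only that weight changes continue until the activations return to the prescribed range.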
