Abstract
The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems are mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One of the important standing issues is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent must also have the capacity to switch between these states on time scales comparable to those on which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allows them to surpass the level of activity imposed by their homeostatic operating conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior also depends on the underlying network structure, which is either engineered or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling the locomotion of a hexapod with 18 degrees of freedom and the obstacle avoidance of a wheel-driven robot.
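To make the single-unit picture above more concrete, the following minimal sketch (in Python, for illustration only) iterates a discrete-time neuron with a tanh transfer function and a self-connection. The update rule, the parameter values, and the simple homeostatic adjustment of the self-weight are generic assumptions for this sketch, not the synaptic dynamics derived in the paper.

# Minimal sketch of a discrete-time neuron with a self-connection, illustrating
# the kind of single-unit dynamics discussed above. The tanh transfer function
# and the homeostatic weight adaptation used here are generic textbook choices,
# NOT the specific self-regulating dynamics derived in the paper.
import numpy as np

def simulate_neuron(steps=500, w_self=1.5, bias=0.0, external=0.0,
                    adapt=False, target=0.5, rate=0.01):
    """Iterate a(t+1) = bias + w_self * tanh(a(t)) + external.

    If adapt is True, the self-weight is slowly adjusted so that the squared
    output approaches `target` -- a hypothetical homeostatic rule used only
    for illustration.
    """
    a = 0.1                      # initial activation
    trace = np.empty(steps)
    for t in range(steps):
        o = np.tanh(a)           # neuron output
        a = bias + w_self * o + external
        if adapt:
            # push output power toward the target level (illustrative only)
            w_self += rate * (target - o * o)
        trace[t] = o
    return trace, w_self

if __name__ == "__main__":
    # Without a self-connection (w_self = 0) the output settles to tanh(bias).
    fixed, _ = simulate_neuron(w_self=0.0, bias=0.3)
    # With a strong self-connection the unit can sustain activity on its own.
    sustained, w_end = simulate_neuron(w_self=1.5, adapt=True)
    print(fixed[-1], sustained[-5:], w_end)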
Highlights
Living systems, which have to survive in a complex, permanently changing environment, must exhibit life-sustaining behavior.
We demonstrate the operation of networks of these modules for the control of behavior in the sensorimotor loop.
DYNAMICS OF SELF-REGULATING NEURONS: To get a first impression of the SRN dynamics, we study the dynamics of a single neuron with and without a self-connection.
Summary
Living systems, which have to survive in a complex, permanently changing environment, must exhibit life-sustaining behavior. For autonomous agents, such as animats, this is one of the desired capacities. To achieve this objective, autonomous agents are equipped with different types of sensors, with proprioceptors monitoring their internal states, and with motors to articulate their body movements. Since every movement of the body changes the inputs to the sensors and proprioceptors, these agents always operate in a sensorimotor loop. Examples include tropisms of wheel-driven robots (Hülse and Pasemann, 2002; Smith et al., 2002), biped walking (Manoonpong et al., 2007; Kubisch et al., 2011), active tracking (Negrello and Pasemann, 2008), quadruped locomotion (Manoonpong et al., 2006; Ijspeert et al., 2007; Shim and Husbands, 2012), hexapod locomotion (Beer and Gallagher, 1992), and swimming robots (Ijspeert et al., 2007; Shim and Husbands, 2012).
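As a purely illustrative complement to the sensorimotor-loop description above, the sketch below closes a loop between two assumed distance sensors and two wheel motors through a tiny recurrent network. The network weights, the DummyRobot stand-in, and the read_distance_sensors/set_wheel_speeds interface are hypothetical placeholders, not the controllers or platforms used in the cited works.

# Illustrative sketch of a sensorimotor loop: two distance sensors feed a tiny
# recurrent network whose outputs drive the wheels of a differential-drive
# robot. Every motor command changes the next sensor reading, closing the loop.
import numpy as np

class LoopController:
    def __init__(self):
        # 2 sensor inputs -> 2 motor neurons with recurrent self-connections
        self.W_in = np.array([[-1.0, 2.0],    # left motor neuron
                              [ 2.0, -1.0]])  # right motor neuron
        self.W_rec = np.eye(2) * 0.8          # self-connections (short memory)
        self.bias = np.array([0.5, 0.5])
        self.a = np.zeros(2)                  # neuron activations

    def step(self, sensors):
        # discrete-time additive update with tanh transfer function
        self.a = self.bias + self.W_in @ sensors + self.W_rec @ np.tanh(self.a)
        return np.tanh(self.a)                # wheel speeds in [-1, 1]

def run_loop(robot, controller, steps=1000):
    """Close the loop: sense, compute motor commands, act, repeat."""
    for _ in range(steps):
        sensors = robot.read_distance_sensors()    # assumed robot interface
        left, right = controller.step(sensors)
        robot.set_wheel_speeds(left, right)         # assumed robot interface

class DummyRobot:
    """Stand-in environment so the sketch can run without hardware."""
    def __init__(self):
        self.rng = np.random.default_rng(0)
    def read_distance_sensors(self):
        return self.rng.uniform(0.0, 1.0, size=2)   # fake left/right distances
    def set_wheel_speeds(self, left, right):
        pass                                        # would move a real robot

if __name__ == "__main__":
    run_loop(DummyRobot(), LoopController(), steps=100)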