Abstract

Using evolutionary simulations, we develop autonomous agents controlled by artificial neural networks (ANNs). Agents equipped with fully recurrent ANN controllers attain high performance levels in simple lifelike tasks of foraging and navigation. In a set of experiments sharing the same behavioral task but differing in the sensory input available to the agents, we find a common structure: a command neuron that switches the dynamics of the network between radically different behavioral modes. When sensory position information is available, the command neuron reflects a map of the environment, acting as a location-dependent cell sensitive to the location and orientation of the agent. When such information is unavailable, the command neuron's activity is based on a spontaneously evolving short-term memory mechanism, which underlies its apparent place-sensitive activity. A two-parameter stochastic model for this memory mechanism is proposed. We show that the parameter values emerging from the evolutionary simulations are near optimal; evolution takes advantage of seemingly harmful features of the environment to maximize the agent's foraging efficiency. The accessibility of evolved ANNs to detailed inspection, together with the resemblance of some of the results to known findings in neurobiology, makes evolved ANNs an excellent candidate model for studying the structure-function relationship in complex nervous systems.
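For readers unfamiliar with this kind of setup, the following is a minimal, self-contained sketch of the class of evolutionary simulation described above: a fully recurrent ANN controlling a foraging agent in a small grid arena, with the network weights evolved by a simple mutation-only genetic algorithm. The arena size, sensor set, network dimensions, and genetic-algorithm parameters are illustrative assumptions, not the configuration used in the paper.

# Minimal sketch (not the authors' code): evolving a fully recurrent ANN
# controller for a grid-world foraging agent. All environment details,
# network sizes, and GA parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

GRID, STEPS, N_FOOD = 10, 100, 15         # assumed arena size, lifetime, food count
N_IN, N_HID = 5, 8                        # assumed sensor and hidden-unit counts
N_OUT = 3                                 # actions: turn left, turn right, move forward
N_NEURONS = N_HID + N_OUT                 # fully recurrent pool; last 3 act as motor units
N_W = N_NEURONS * (N_NEURONS + N_IN + 1)  # recurrent + input + bias weights per genome

def step_network(genome, state, sensors):
    """One synchronous update of the fully recurrent network."""
    W = genome.reshape(N_NEURONS, N_NEURONS + N_IN + 1)
    x = np.concatenate([state, sensors, [1.0]])    # previous activations, inputs, bias
    new_state = np.tanh(W @ x)
    action = int(np.argmax(new_state[-N_OUT:]))    # winner-take-all motor readout
    return new_state, action

def evaluate(genome):
    """Fitness = number of food items collected during one lifetime (assumed task)."""
    food = set(map(tuple, rng.integers(0, GRID, size=(N_FOOD, 2))))
    pos, heading = np.array([GRID // 2, GRID // 2]), 0   # headings: 0=N, 1=E, 2=S, 3=W
    state, eaten = np.zeros(N_NEURONS), 0
    moves = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}
    for _ in range(STEPS):
        ahead = tuple((pos + moves[heading]) % GRID)
        sensors = np.array([pos[0] / GRID, pos[1] / GRID, heading / 3,
                            float(ahead in food), float(tuple(pos) in food)])
        state, action = step_network(genome, state, sensors)
        if action == 0:
            heading = (heading - 1) % 4
        elif action == 1:
            heading = (heading + 1) % 4
        else:
            pos = (pos + moves[heading]) % GRID
        if tuple(pos) in food:
            food.discard(tuple(pos))
            eaten += 1
    return eaten

def evolve(pop_size=50, generations=100, sigma=0.1):
    """Simple mutation-only genetic algorithm over the weight genomes."""
    pop = rng.normal(0, 0.5, size=(pop_size, N_W))
    for g in range(generations):
        fitness = np.array([evaluate(ind) for ind in pop])
        order = np.argsort(fitness)[::-1]
        elite = pop[order[: pop_size // 5]]            # top 20% survive unchanged
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        children = children + rng.normal(0, sigma, size=children.shape)
        pop = np.vstack([elite, children])
        if g % 20 == 0:
            print(f"gen {g}: best fitness = {fitness.max()}")
    return pop[np.argmax([evaluate(ind) for ind in pop])]

if __name__ == "__main__":
    best = evolve()

Because the motor units are part of the recurrent pool, the evolved controller can, in principle, maintain internal state across time steps, which is the property that allows memory-driven behavior of the kind analyzed in the paper to emerge.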
