Abstract

In higher animals, an increasingly complex hierarchy of visual receptive fields exists from early to higher visual areas, where visual input becomes more and more indirect. From there the system propagates its activity through many further stages to the end-effectors (muscles). Recently, however, it has been pointed out that in simple animals such as flies a motor neuron can itself have a visual receptive field (Krapp and Huston, 2005); a motor neuron can thus have a sensory property. Such receptive fields generate behaviour directly: these neurons close the perception-action loop without intermediate stages and thereby create feedback to the sensors. In the first part of this thesis we will show that such directly coupled sensor-motor receptive fields can be developed in simple behavioural systems by means of a correlation-based temporal sequence learning algorithm. The main goal is to demonstrate that learning generates stable behaviour and that the resulting receptive fields also become stable as soon as the newly learned behaviour is successful. Developing both stable neuronal properties and stable behaviour is a difficult problem, because convergence of functional neuronal properties and of behaviour has to be guaranteed at the same time. This work is a first step towards a solution of this problem, demonstrated on a simple robot system. This part of the thesis concludes by addressing the question of how indirect sensor-motor coupling, as found in higher animals, can be established. By implementing simple chained learning architectures, we will demonstrate that similar results can be obtained even for secondary receptive fields, which receive only indirect visual input.

In the second part of this thesis we will quantitatively analyse closed-loop learning systems that perform temporal sequence learning as presented in the first part.
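A correlation-based temporal sequence rule of this kind is, in essence, differential Hebbian: the weight of an early, predictive input grows in proportion to the correlation between that input and the temporal derivative of the neuron's output. The following discrete-time sketch is an illustration only; the variable names, signal shapes and parameters below are our assumptions, not the implementation used in the thesis:

```python
import numpy as np

def sequence_learning(x_early, x_late, mu=0.01, w_early=0.0, w_late=1.0):
    """Minimal correlation-based temporal sequence learning sketch:
    the weight of the early (predictive) input changes with the product
    of that input and the temporal derivative of the neuron's output."""
    v_prev = 0.0
    w_trace = []
    for xe, xl in zip(x_early, x_late):
        v = w_early * xe + w_late * xl   # neuron output
        dv = v - v_prev                  # discrete-time derivative of output
        w_early += mu * xe * dv          # correlate early input with dv/dt
        v_prev = v
        w_trace.append(w_early)
    return w_early, w_trace

# The early signal precedes the late (reflex-like) signal by a few time
# steps, so its correlation with the rising output is positive and the
# predictive weight grows over repeated pairings.
t = np.arange(200)
x_early = np.exp(-0.5 * ((t % 50 - 20) / 3.0) ** 2)
x_late = np.exp(-0.5 * ((t % 50 - 25) / 3.0) ** 2)
w, trace = sequence_learning(x_early, x_late)
```

Because the early pulse overlaps mainly with the rising flank of the later reflex signal, the net weight change per pairing is positive, which is the mechanism that lets a predictive input take over a reflex.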
Here we try to answer the following question: how can we predict which system from a given class will be the best for a particular scenario? Understanding closed-loop behavioural systems is a non-trivial problem, especially when they change during learning. Information-theoretic descriptions of closed-loop systems date back to the 1950s; however, only a few attempts have taken learning into account, mostly by measuring the information content of the inputs. To address the question stated above, we will investigate simulated agents using energy and entropy measures and follow their development during learning. In this way we can show that, within well-specified scenarios, there are indeed learning agents which are optimal with respect to their structure and adaptive properties. We will also show that, for relatively simple cases, analytical solutions can be found for the temporal development of such agents.

In the first two parts we analyse systems which use uni-modal sensory input (visual or somatosensory). In the third and last part of this thesis we will investigate how multi-modal sensor integration influences the development of receptive fields and behavioural performance. This is motivated by experiments with rodents, which demonstrate that although visual cues play an important role in the formation of hippocampal place cells and in spatial navigation, rats can also rely on olfactory, auditory, somatosensory and self-motion cues. Here, for the first time, we present a place cell model in which visual and olfactory cues are combined in order to form place fields. This is realised by a simple feed-forward network with a winner-takes-all learning mechanism. We solve a goal navigation task using the proposed navigation mechanism, based on self-marking by odour patches combined with a Q-learning algorithm.
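Winner-takes-all learning of this kind can be sketched as follows: visual and olfactory feature vectors are concatenated into one multi-modal input, and only the place unit whose weight vector best matches that input updates its weights toward it. This is a minimal sketch under assumed dimensions and random inputs, not the place cell model of the thesis:

```python
import numpy as np

def wta_step(weights, x, eta=0.1):
    """One winner-takes-all update: the unit with the highest activation
    wins and moves its weight vector toward the current input."""
    activations = weights @ x
    winner = int(np.argmax(activations))
    weights[winner] += eta * (x - weights[winner])
    weights[winner] /= np.linalg.norm(weights[winner])  # keep weights bounded
    return winner

rng = np.random.default_rng(0)
n_units, n_vis, n_olf = 8, 6, 4
W = rng.random((n_units, n_vis + n_olf))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# A multi-modal input: visual features concatenated with olfactory features.
x = np.concatenate([rng.random(n_vis), rng.random(n_olf)])
x /= np.linalg.norm(x)
for _ in range(50):
    winner = wta_step(W, x)
```

Repeated presentation of the same multi-modal input drives the winning unit's weight vector toward that input, so the unit becomes selective for one sensory configuration, i.e. it develops a place-field-like response.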
We show that olfactory cues play an important role in the formation of the place fields and demonstrate that a combination of visual and olfactory cues, together with a mixed navigation strategy, improves goal-directed navigation.
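The Q-learning component can be illustrated independently of the place-field model. Below is a generic tabular Q-learning sketch on a one-dimensional corridor; the toy environment and all parameters are our assumptions, whereas the thesis combines Q-learning with self-deposited odour patches in a richer setting:

```python
import numpy as np

def q_learning(n_states=10, goal=9, alpha=0.5, gamma=0.9,
               eps=0.2, episodes=500, seed=0):
    """Tabular Q-learning on a 1-D corridor: action 0 moves left,
    action 1 moves right, and reaching the goal state yields reward 1."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))
    for _ in range(episodes):
        s = 0
        for _ in range(1000):          # step cap per episode
            if s == goal:
                break
            # epsilon-greedy action selection with random tie-breaking
            if rng.random() < eps or Q[s, 0] == Q[s, 1]:
                a = int(rng.integers(2))
            else:
                a = int(np.argmax(Q[s]))
            s_next = min(n_states - 1, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s_next == goal else 0.0
            Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
    return Q

Q = q_learning()
greedy = [int(np.argmax(Q[s])) for s in range(9)]  # action 1 = move right
```

Random tie-breaking matters here: before any reward has been observed, a deterministic argmax would always pick action 0 and the agent would never leave the start state except by epsilon-exploration.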
