Abstract

Prior research has shown that representations of retinal surfaces can be learned from the intrinsic structure of visual sensory data in neural simulations, in robots, as well as by animals. Furthermore, representations of cochlear (frequency) surfaces can be learned from auditory data in neural simulations. Advances in hardware technology have allowed the development of artificial skin for robots, realising a new sensory modality whose sensorimotor characteristics differ from those of vision and audition in important respects. This provides an opportunity to further investigate ordered sensory map formation using computational tools. We show that it is possible to learn representations of non-trivial tactile surfaces, which require topologically and geometrically involved three-dimensional embeddings. Our method automatically constructs a somatotopic map corresponding to the configuration of tactile sensors on a rigid body, using only intrinsic properties of the tactile data. The additional complexities involved in processing the tactile modality require the development of a novel multi-dimensional scaling algorithm. This algorithm, ANISOMAP, extends previous methods and outperforms them, producing high-quality reconstructions of tactile surfaces in both simulation and hardware tests. In addition, the reconstruction proved robust to unanticipated hardware failure.
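The ANISOMAP algorithm itself is not reproduced in this excerpt, but the general approach it extends — recovering sensor geometry from the intrinsic statistics of the sensory stream via multi-dimensional scaling — can be sketched in miniature. The simulation below is illustrative only (the grid layout, Gaussian touch model, and all parameters are assumptions, not the paper's setup): sensors that co-activate often are taken to be close, and classical MDS is applied to correlation-derived distances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth layout: 25 tactile sensors on a 5x5 grid (illustrative geometry).
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
true_pos = np.column_stack([xs.ravel(), ys.ravel()])           # (25, 2)
n_sensors = len(true_pos)

# Simulate touches: each touch is a Gaussian bump of activation at a random point.
n_touches, sigma = 2000, 1.5
centres = rng.uniform(0.0, 4.0, size=(n_touches, 2))
d2 = ((centres[:, None, :] - true_pos[None, :, :]) ** 2).sum(-1)
activations = np.exp(-d2 / (2.0 * sigma ** 2))                 # (touches, sensors)

# Intrinsic distance proxy: strongly correlated sensors are assumed nearby.
corr = np.corrcoef(activations.T)
dist = np.sqrt(np.clip(1.0 - corr, 0.0, None))

# Classical MDS: double-centre the squared distances, keep the top-2 eigenpairs.
J = np.eye(n_sensors) - np.ones((n_sensors, n_sensors)) / n_sensors
B = -0.5 * J @ (dist ** 2) @ J
evals, evecs = np.linalg.eigh(B)                 # eigenvalues in ascending order
top = np.argsort(evals)[::-1][:2]
recon = evecs[:, top] * np.sqrt(np.maximum(evals[top], 0.0))   # up to rotation/scale

# Quality check invariant to rotation and scale: compare pairwise distance patterns.
def pdist(p):
    return np.sqrt(((p[:, None, :] - p[None, :, :]) ** 2).sum(-1))

iu = np.triu_indices(n_sensors, k=1)
quality = np.corrcoef(pdist(true_pos)[iu], pdist(recon)[iu])[0, 1]
```

The reconstructed layout matches the true grid up to rotation, reflection, and scale, which is why the quality measure compares pairwise distance patterns rather than raw coordinates. ANISOMAP addresses complications this sketch ignores, such as surfaces requiring curved three-dimensional embeddings.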

Highlights

  • Note that one of the sensors in the physical prototype failed between experimental runs; data from this sensor was included in the input to the reconstruction methods for all hardware experiments, allowing us to observe the performance of the algorithms in the face of hardware failure


Introduction

Spatial projections of various sensory (and motor) surfaces onto neural structures are common in neuroanatomy, where they are known as topographic maps. In the primary visual cortex (V1), neighbouring cells in the retina activate neighbouring cortical columns (retinotopy). Sensoritopic map formation involves self-organising processes which are guided by sensory signals: ferrets can develop retinotopic maps in the auditory cortex if their visual afferent nerves are surgically rerouted there [2]; in mice, spontaneous in utero waves of activation on the retina are involved in topographic map formation [3]. Simulations of retinotopic map formation based on self-organising maps have been claimed to accurately model the visual cortex, including reproducing features such as ocular dominance maps and visual after-effects [4]. Self-organising maps have been used to model tonotopic features of the auditory cortex in certain bats [5].
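The self-organising map mechanism referenced above can be illustrated in miniature. The sketch below (a minimal one-dimensional Kohonen map; the parameters are assumptions unrelated to the cited models) trains on random scalar stimuli. Topographic order — neighbouring units becoming tuned to neighbouring stimulus values — emerges purely from the neighbourhood function, with no spatial information in the input.

```python
import numpy as np

rng = np.random.default_rng(1)

n_units, n_steps = 20, 5000
weights = rng.uniform(0.0, 1.0, n_units)   # random initial tuning: no spatial order
idx = np.arange(n_units)

for t in range(n_steps):
    x = rng.uniform(0.0, 1.0)                    # a random scalar stimulus
    winner = np.argmin(np.abs(weights - x))      # best-matching unit
    frac = t / n_steps
    sigma = 6.0 * (0.5 / 6.0) ** frac            # neighbourhood width decays 6 -> 0.5
    lr = 0.5 * (0.02 / 0.5) ** frac              # learning rate decays 0.5 -> 0.02
    # Pull the winner and its map-space neighbours towards the stimulus.
    h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))
    weights += lr * h * (x - weights)

# Topographic order: unit index and tuned value end up (anti)monotonically related.
order = np.corrcoef(idx, weights)[0, 1]
```

Because neighbours in map space are dragged along with each winner, kinks in the index-to-tuning mapping are ironed out early, while the shrinking neighbourhood later lets units specialise — the same broad-to-fine dynamic invoked in models of retinotopic and tonotopic map formation.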

