Abstract

The cognitive connection between the senses of touch and vision is probably the best-known case of multimodality. Recent discoveries suggest that the mapping between both senses is learned rather than innate. This evidence opens the door to a dynamic multimodality that allows individuals to adaptively develop within their environment. By mimicking this aspect of human learning, we propose a new multimodal mechanism that allows artificial cognitive systems (ACS) to quickly adapt to unforeseen perceptual anomalies generated by the environment or by the system itself. In this context, visual recognition systems have advanced remarkably in recent years thanks to the creation of large-scale datasets together with the advent of deep learning algorithms. However, this has not been the case for the haptic modality, where the lack of two-handed dexterous datasets has limited the ability of learning systems to process the tactile information of human object exploration. This data imbalance hinders the creation of synchronized datasets that would enable the development of multimodality in ACS during object exploration. In this work, we use a recently generated multimodal dataset in which tactile sensors placed on a collection of objects capture haptic data from human manipulation, together with the corresponding visual counterpart. Using this data, we create a multimodal learning transfer mechanism capable of both detecting sudden and permanent anomalies in the visual channel and maintaining visual object recognition performance by retraining the visual mode for a few minutes using haptic information. Our proposal for perceptual awareness and self-adaptation is of noteworthy relevance, as it can be applied to any system that satisfies two very generic conditions: it can classify each mode independently, and it is provided with a synchronized multimodal dataset.
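As an illustration of the retraining step described above, the sketch below shows how predictions from a trusted haptic classifier could serve as pseudo-labels to fine-tune the visual classifier once a visual anomaly has been detected. This is a minimal sketch under stated assumptions, not the authors' implementation: the names `vision_net`, `haptic_net`, and `paired_loader`, and the hyperparameters, are hypothetical.

```python
# Minimal sketch (not the authors' code): after a visual anomaly is flagged,
# the haptic classifier supplies pseudo-labels to fine-tune the visual classifier.
# vision_net, haptic_net, and paired_loader are hypothetical placeholders.
import torch
import torch.nn.functional as F

def cross_modal_retraining(vision_net, haptic_net, paired_loader, epochs=3, lr=1e-4):
    """Fine-tune the visual model using pseudo-labels from the haptic model."""
    haptic_net.eval()        # the trusted (haptic) modality stays frozen
    vision_net.train()
    optimizer = torch.optim.Adam(vision_net.parameters(), lr=lr)
    for _ in range(epochs):
        for images, haptic_states in paired_loader:   # synchronized multimodal samples
            with torch.no_grad():
                pseudo_labels = haptic_net(haptic_states).argmax(dim=1)
            logits = vision_net(images)
            loss = F.cross_entropy(logits, pseudo_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return vision_net
```

Because only the degraded modality is updated, a loop of this kind can run for a few minutes on the synchronized dataset without touching the haptic classifier.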

Highlights

  • We present an Artificial Cognitive System (ACS) that builds a multimodal ability from human manipulation data, achieving perceptual awareness and a dynamic capacity to adapt to changing environments

  • Every sample is stored as a 24-bit array, h_j, called the haptic state of the j-th sample, with one bit for each copper pad (Fig. 2b); the system has no information about the relationship between the locations of the 24 sensors and the positions of their statuses in the array (a minimal encoding sketch follows this list)

  • Since each sensor has a fixed position inside the array, the resulting state would differ if the sensors were placed differently
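A minimal sketch of how such a haptic state might be packed, assuming boolean pad readings; the pad-to-bit assignment below is arbitrary, mirroring the fact that the system is given no mapping between sensor locations and bit positions.

```python
# Sketch of the 24-bit haptic state described above; the pad ordering is
# arbitrary, since the system has no pad-to-position mapping.
def haptic_state(pad_readings):
    """Pack 24 boolean capacitive-pad readings into a 24-element bit array."""
    assert len(pad_readings) == 24
    return [1 if touched else 0 for touched in pad_readings]

# Example: pads 0, 5 and 23 report contact for sample j.
h_j = haptic_state([i in (0, 5, 23) for i in range(24)])
```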


Summary

Introduction

The cognitive connection between the senses of touch and vision is probably the best-known case of multimodality. By mimicking this aspect of human learning, we propose a new multimodal mechanism that allows artificial cognitive systems (ACS) to quickly adapt to unforeseen perceptual anomalies generated by the environment or by the system itself. In this context, visual recognition systems have advanced remarkably in recent years thanks to the creation of large-scale datasets together with the advent of deep learning algorithms. Synchronized haptic and visual data allow us to define and implement a new adaptive and autonomous system for changing environments. To achieve this objective, we designed and 3D-printed novel objects that collect human exploration data through multiple capacitive touch sensors on their surfaces. Our findings suggest that, with the implementation of biologically inspired multimodality, the ACS becomes perceptually aware of a faulty sensory modality and autonomously adapts to changing environments without losing performance.
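As a hedged illustration of the perceptual-awareness step, the sketch below flags the visual channel as faulty when its predictions stop agreeing with the haptic channel over a sliding window of synchronized samples. The window size and agreement threshold are assumptions for illustration, not values from the paper.

```python
# Hedged sketch: monitor cross-modal agreement and flag a suspected visual
# anomaly when it falls below a threshold. Window and threshold are assumed.
from collections import deque

def make_anomaly_monitor(window=50, min_agreement=0.6):
    history = deque(maxlen=window)

    def update(visual_pred, haptic_pred):
        """Record one synchronized prediction pair; return True if the visual
        channel is suspected to be faulty."""
        history.append(visual_pred == haptic_pred)
        if len(history) == window:
            agreement = sum(history) / window
            return agreement < min_agreement
        return False

    return update
```

A monitor of this kind is what would trigger the cross-modal retraining sketched earlier, closing the loop between perceptual awareness and self-adaptation.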

