Abstract

This work describes the development of a vision-based tactile sensor system that combines image-based information from the tactile sensor with input loads applied under various motions to train a neural network to estimate tactile contact position, area, and force distribution. The study also addresses pragmatic aspects such as the choice of thickness and material for the tactile fingertips and surface behavior. The overall vision-based tactile sensor apparatus interacts with an actuating motion controller, a force gauge, and a control PC (personal computer) running LabVIEW software. Image acquisition was carried out with a compact stereo camera setup mounted inside the elastic body to observe and measure the deformation caused by the motion and input load. The vision-based tactile sensor test bench was used to collect the output contact position, angle, and force distribution produced by various randomly chosen input loads for translational motion in the X, Y, and Z directions and for rotational motion. The retrieved image information, contact position, area, and force distribution from the different input loads at specified 3D positions and angles were used for deep learning. A VGG-16 convolutional neural network classification model was modified into a regression network, and transfer learning was applied to suit the regression task of estimating contact position and force distribution. Several experiments were carried out using thick and thin tactile sensors with various shapes, such as circles, squares, and hexagons, to validate the predicted contact position, contact area, and force distribution.

Highlights

  • Vision-based processing has been a part of inference in many interdisciplinary fields of research [1,2,3]

  • The tactile sensors can perceive the physical aspects of any object, which guides the handling of the object in terms of strength applied to interact with them [7]

  • The primary purpose of this study is to develop a learned vision-based tactile sensor mechanism that uses indirect contact with the object to estimate the force and contact position of the impact when the object is interacting with the elastic body


Introduction

Vision-based processing has been a part of inference in many interdisciplinary fields of research [1,2,3]. The correlation between the input load force, the contact position, and the transformed image captured by the camera sensor can be learned across various scenarios [11]. Vision-based tactile sensor technology eliminates the need for separate traditional array-type tactile sensor strips, which are usually less durable and prone to breakage and a large signal-processing burden [12]. This type of vision-based tactile sensor is more like a single-element type with no physical interaction with the elastic body. The industrial vision-based tactile sensor equipment used in this study is depicted along with the transformed stereo image pair caused by deformations of the elastic body.
