Abstract

Vision-based haptic sensors have emerged as a promising approach to robotic touch due to affordable high-resolution cameras and successful computer vision techniques; however, their physical design and the information they provide do not yet meet the requirements of real applications. We present a robust, soft, low-cost, vision-based, thumb-sized three-dimensional haptic sensor named Insight, which continually provides a directional force-distribution map over its entire conical sensing surface. Constructed around an internal monocular camera, the sensor has only a single layer of elastomer over-moulded on a stiff frame to guarantee sensitivity, robustness and soft contact. Furthermore, Insight uniquely combines photometric stereo and structured light using a collimator to detect the three-dimensional deformation of its easily replaceable flexible outer shell. The force information is inferred by a deep neural network that maps images to the spatial distribution of three-dimensional contact force (normal and shear). Insight has an overall spatial resolution of 0.4 mm, a force magnitude accuracy of around 0.03 N and a force direction accuracy of around five degrees over a range of 0.03–2 N for numerous distinct contacts with varying contact area. The presented hardware and software design concepts can be transferred to a wide variety of robot parts.
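The abstract describes a deep network that maps internal camera images to a spatial map of three-dimensional contact forces. The actual architecture is not given here, so the following is only a minimal shape-level sketch in numpy: it splits an assumed 64×64 image into an 8×8 grid of cells and applies an untrained linear head per cell to produce an (Fx, Fy, Fz) triple, standing in for the learned image-to-force-distribution mapping.

```python
import numpy as np

# Minimal sketch (not the authors' model): map a camera image to a
# per-cell map of 3D contact forces (Fx, Fy, Fz). The weights are random
# stand-ins for a trained deep network; sizes are illustrative assumptions.
rng = np.random.default_rng(0)

H, W = 64, 64          # assumed input image size
GRID = 8               # assumed output map resolution (GRID x GRID cells)
PATCH = H // GRID      # pixels per output cell

def image_to_force_map(image, weights, bias):
    """Predict a (GRID, GRID, 3) force map from an (H, W) image."""
    # Regroup pixels into GRID x GRID cells of PATCH x PATCH pixels each.
    cells = image.reshape(GRID, PATCH, GRID, PATCH).transpose(0, 2, 1, 3)
    feats = cells.reshape(GRID, GRID, PATCH * PATCH)  # one feature vector per cell
    return feats @ weights + bias                     # linear head -> (Fx, Fy, Fz)

weights = rng.normal(scale=0.01, size=(PATCH * PATCH, 3))
bias = np.zeros(3)

image = rng.random((H, W))
force_map = image_to_force_map(image, weights, bias)
print(force_map.shape)  # (8, 8, 3): one 3D force vector per surface cell
```

In the real sensor the per-cell head would be a trained deep network over the full image, and the output grid would cover the conical sensing surface rather than a flat square.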

Highlights

  • Photometric effects and structured lighting enable the sensor to detect the tiny deformations of its surface that are caused by physical contact.

  • The contact force vectors could be numerically computed from the observed deformations according to elasticity theory, but the material properties are not uniform, and the necessary assumption of a linear relationship between deformation and force [33, 40] is often violated.


Introduction

Robots have the potential to perform useful physical tasks in a wide range of application areas [1–4]. To robustly manipulate objects in complex and changing environments, a robot must be able to perceive when, where, and how its body is contacting other things. Widely studied and highly successful for environment perception at a distance, centrally mounted cameras and computer vision are poorly suited to real-world robot contact perception due to occlusion and the small scale of the deformations involved. Robots need touch-sensitive skin, but few haptic sensors exist that are suitable for practical applications. Recent developments have shown that machine-learning-based approaches are especially promising for creating dexterous robots [2, 5, 6]. In such self-learning scenarios and real-world applications, the need for extensive data makes it critical that sensors are robust and keep providing good readings over thousands of hours of rough interaction. Machine learning opens new possibilities for tackling this haptic sensing challenge by replacing handcrafted numeric calibration procedures with end-to-end mappings learned from data [7].
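The idea of replacing a handcrafted calibration with a mapping learned from data can be sketched in a few lines. The example below is purely illustrative (it is not the paper's method): synthetic raw sensor readings are related to a reference force through an assumed unknown nonlinearity, and ordinary least squares on polynomial features recovers the reading-to-force mapping without an explicit material model.

```python
import numpy as np

# Illustrative only: fit a calibration from data instead of deriving it
# by hand. The square-root sensor response and the 0.03-2 N force range
# are assumptions for the sake of the example.
rng = np.random.default_rng(1)

f_true = rng.uniform(0.03, 2.0, size=200)            # reference forces (N)
raw = np.sqrt(f_true) + 0.01 * rng.normal(size=200)  # noisy nonlinear readings

# Polynomial features of the raw reading; lstsq learns the mapping.
X = np.stack([np.ones_like(raw), raw, raw**2], axis=1)
coef, *_ = np.linalg.lstsq(X, f_true, rcond=None)

f_pred = X @ coef
rmse = np.sqrt(np.mean((f_pred - f_true) ** 2))
print(f"calibration RMSE: {rmse:.3f} N")
```

A deep network plays the same role at scale, learning the full mapping from raw camera images to force distributions rather than from a single scalar reading.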
