Abstract

Underactuated hands are useful tools for robotic in-hand manipulation tasks due to their ability to adapt seamlessly to unknown objects. For a robot using such a hand to achieve and maintain stable grasping under external disturbances while tracking the state of an in-hand object, it must learn the relationships between objects and tactile sensing data. The human somatosensory system combines visual and tactile sensing information in its “What and Where” subsystem to achieve high levels of manipulation skill. The present paper proposes an approach for estimating the pose of in-hand objects that combines tactile sensing data with visual frames of reference, analogous to the human “What and Where” subsystem. The proposed system uses machine learning methods to estimate the orientation of in-hand objects from data gathered by tactile sensors mounted on the phalanges of underactuated fingers. While tactile sensing provides local information about objects during in-hand manipulation, a vision system generates egocentric and allocentric frames of reference. A dual fuzzy logic controller was developed to achieve and sustain stable grasping conditions autonomously while forces were applied to in-hand objects, exposing the system to different object configurations. Two sets of experiments explored the system’s capabilities. In the first set, external forces changed the orientation of objects while the fuzzy controller kept the objects in-hand, and the tactile and visual data collected were supplied to five machine learning estimators; among these, the ridge regressor achieved the lowest average mean squared error. In the second set, one of the underactuated fingers performed open-loop object rotations, and the recorded data were supplied to the same set of estimators. In this scenario, the multilayer perceptron (MLP) neural network achieved the lowest mean squared error.
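To make the estimation step concrete, the sketch below shows how two of the five estimator families mentioned above (a ridge regressor and an MLP) could be trained to map tactile readings to an orientation angle. It is a minimal illustration under stated assumptions, not the authors' implementation: the paper's actual features, sensor counts, and hyperparameters are not reproduced here, so the data are synthetic placeholders and scikit-learn stands in for whatever toolchain was actually used.

    # Minimal sketch (not the authors' code): estimating in-hand object
    # orientation from tactile readings with two of the estimator
    # families named in the abstract. The dataset is synthetic; the real
    # features come from tactile sensors on the phalanges of
    # underactuated fingers plus the vision system's frames of reference.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)

    # Hypothetical data: each row is a vector of pressure readings from
    # taxels on the phalanges; the target is the object's orientation
    # angle (degrees) as measured in the vision system's frame.
    n_samples, n_taxels = 500, 12
    X = rng.uniform(0.0, 1.0, size=(n_samples, n_taxels))
    true_weights = rng.normal(size=n_taxels)
    y = 15.0 * (X @ true_weights) + rng.normal(scale=0.5, size=n_samples)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    estimators = {
        "ridge": Ridge(alpha=1.0),
        "mlp": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000,
                            random_state=0),
    }
    for name, model in estimators.items():
        model.fit(X_train, y_train)
        mse = mean_squared_error(y_test, model.predict(X_test))
        print(f"{name}: test MSE = {mse:.3f}")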

Highlights

  • Although widespread in controlled environments such as factories, robots are moving towards unstructured settings like homes, schools, and hospitals that demand high-level, complex, and fast reasoning; several challenges remain before robot skills reach human-level capabilities [1]. While robots can accurately perform tasks such as walking, picking and placing objects, and understanding and communicating with people, they still lack hand dexterity

  • One finger performed in-hand manipulations, rotating the object while it was held in a stable grasp

  • The in-hand manipulations driven by the finger can be observed to change the object’s angle four times



Introduction

Although robots can accurately perform several tasks, such as walking, picking and placing objects, and understanding and communicating with people, they still lack hand dexterity. Grasping and manipulating objects is a distinctive part of the human skill set. It is an ability that evolved from the erect posture that freed our upper limbs, turning our hands into two sophisticated sets of tools [3]. During pick and place tasks, the goal of robotic platforms is to change the position and orientation of an object inside the manipulator’s workspace. In-hand manipulation is the ability to change the pose of an object, from its initial orientation to a given one, within one hand. Underactuated hands arise as an option that can achieve a reasonable level of dexterity with simplicity.
