Abstract

Robots and intelligent industrial systems that focus on sorting or inspection of products require end-effectors that can grasp and manipulate the objects surrounding them. The capability of such systems largely depends on their ability to efficiently identify the objects and estimate the forces exerted on them. This paper presents an underactuated, compliant, and lightweight hyper-adaptive robot gripper that can efficiently discriminate between different everyday objects and estimate the contact forces exerted on them during a single grasp, using vision-based techniques. The hyper-adaptive mechanism consists of an array of movable steel rods that reconfigure to conform to the geometry of the grasped object. The proposed object identification and force estimation techniques are model-free and do not rely on time-consuming object exploration. A series of experiments have been carried out to discriminate between 12 different everyday objects and to estimate the forces exerted on a dynamometer. During each grasp, a series of images is captured to record the reconfiguration of the hyper-adaptive grasping mechanism. These images are then processed by an image processing algorithm to extract the required information about the gripper reconfiguration, classify the grasped object using a Random Forests (RF) classifier, and estimate the exerted force. The employed RF classifier gives a prediction accuracy of 100%, while the results of the force estimation techniques (Neural Networks, Random Forests, and 3rd order polynomial) range from 94.7% to 99.1%.
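
The following is a minimal sketch of the kind of single-grasp pipeline the abstract describes: per-grasp features extracted from images of the rod array feed a Random Forests classifier for object discrimination and regression models for force estimation. The feature representation (per-rod displacements), array size, and all data below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch, assuming the image-processing step yields one feature
# vector per grasp (e.g., the displacement of each rod in the hyper-adaptive
# array). All names, shapes, and data here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

n_rods = 16                                  # assumed size of the rod array
X = rng.normal(size=(120, n_rods))           # 120 grasps, one feature per rod
object_labels = rng.integers(0, 12, 120)     # 12 everyday objects (dummy labels)
forces = rng.uniform(0.0, 10.0, 120)         # contact force in newtons (dummy)

# Object discrimination with a Random Forests classifier, as in the paper.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, object_labels)

# Force estimation: the paper compares Neural Networks, Random Forests, and a
# 3rd-order polynomial; the latter two are sketched here.
rf_reg = RandomForestRegressor(n_estimators=100, random_state=0)
rf_reg.fit(X, forces)

poly_reg = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
poly_reg.fit(X, forces)

# Single-grasp inference: classify the object and estimate the contact force.
new_grasp = rng.normal(size=(1, n_rods))
print("predicted object:", clf.predict(new_grasp)[0])
print("estimated force (RF):   %.2f N" % rf_reg.predict(new_grasp)[0])
print("estimated force (poly): %.2f N" % poly_reg.predict(new_grasp)[0])
```

Because both the classifier and the regressors consume the same per-grasp feature vector, identification and force estimation can be performed from a single grasp, consistent with the model-free, exploration-free approach the abstract claims.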
