Abstract
In robotic grasping and manipulation, precise knowledge of the object pose is a key issue. The point becomes even more important as the objects, and therefore the grasping areas, become smaller. This is the case in Deformable Linear Object manipulation applications, where the robot must autonomously handle thin wires whose pose and shape can be difficult to estimate given the limited object size and possible occlusions. In such applications, a vision-based system may not be enough to obtain accurate pose and shape estimates. In this work the authors propose a Time-of-Flight pre-touch sensor, integrated with a previously designed tactile sensor, for accurate estimation of thin wire pose and shape. The paper presents the design and characterization of the proposed sensor, together with a dedicated object scanning and shape detection algorithm. Experimental results support the proposed methodology, showing good performance. Hardware design and software applications are freely accessible to the reader.
Highlights
Given the move from structured, safe and controlled robotic cells to unstructured, dynamic and human-shared environments, robots are being called on to perform ever more complex tasks, often mimicking human beings or collaborating with them
Between vision-based and tactile sensors, pre-touch sensors operate at an intermediate range, providing the benefits of both classes: mounted on the robot end-effector, they are more robust to occlusion than cameras; operating at closer range, they can potentially provide more precise measurements; and, like camera/depth sensors, they do not require contact with the object
A low-pass filter (LPF) is implemented at the firmware level and its output is provided to the PC application; a filtering process based on the Signal-to-Noise Ratio (SNR) is implemented at the application level, i.e., when the SNR of a single sample is too low, the sample is discarded
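The two-stage filtering described in the highlight above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exponential smoothing factor `alpha` and the SNR threshold are assumed values chosen for the example, and the firmware-level LPF is emulated here in Python.

```python
def low_pass_filter(samples, alpha=0.2):
    """Exponential moving-average LPF, emulating a firmware-level filter.

    alpha is an assumed smoothing factor (not taken from the paper):
    higher values track the raw signal more closely, lower values smooth more.
    """
    state = samples[0]          # initialize the filter state on the first sample
    filtered = []
    for s in samples:
        state = alpha * s + (1 - alpha) * state
        filtered.append(state)
    return filtered


def discard_low_snr(distances, snrs, snr_threshold=6.0):
    """Application-level filter: drop any sample whose SNR is below a threshold.

    The threshold value is illustrative; a real system would tune it
    against the sensor's noise characteristics.
    """
    return [d for d, snr in zip(distances, snrs) if snr >= snr_threshold]
```

In a typical pipeline the LPF output would be streamed from the sensor firmware, and the SNR gate applied on the PC side before the samples feed the shape-detection step.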
Summary
Given the move from structured, safe and controlled robotic cells to unstructured, dynamic and human-shared environments, robots are being called on to perform ever more complex tasks, often mimicking human beings or collaborating with them. Grasping and manipulation are still challenging due to the intrinsic difficulty of accurately perceiving objects of different shapes and sizes in cluttered environments. In these types of applications, vision/depth sensors can capture positional and geometric information, but they suffer from occlusions and calibration errors. Between vision-based and tactile sensors, pre-touch sensors operate at an intermediate range, providing the benefits of both classes: mounted on the robot end-effector, they are more robust to occlusion than cameras; operating at closer range, they can potentially provide more precise measurements; and, like camera/depth sensors, they do not require contact with the object. In these terms, through a specific scanning strategy, pre-touch sensors enable a robot to acquire the geometric information needed to estimate an object's pose and shape and, consequently, to perform grasping, manipulation and re-grasping actions by exploiting more accurate data.
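As a rough sketch of how a scanning strategy turns pre-touch range readings into a shape estimate, the example below fits a low-order polynomial to (scan position, measured distance) pairs collected while the end-effector sweeps over a wire. The paper's actual shape-detection algorithm is not reproduced here; the polynomial fit via `numpy.polyfit` is an illustrative stand-in, and the function name and signature are assumptions of this sketch.

```python
import numpy as np

def fit_wire_shape(scan_positions, distances, degree=2):
    """Fit a polynomial curve to range samples gathered during a linear scan.

    scan_positions : 1-D positions of the sensor along the scan axis
    distances      : corresponding ToF range readings (already filtered)
    degree         : polynomial order (assumed; a thin wire with gentle
                     curvature is often well captured by a low order)
    Returns a callable np.poly1d model of the wire profile.
    """
    coeffs = np.polyfit(scan_positions, distances, degree)
    return np.poly1d(coeffs)
```

Given such a model, the wire pose at any scan position can be queried to plan a grasp point, e.g. `shape(x)` for the estimated distance at position `x`.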