Abstract
This paper describes how virtual tools that represent real robot end-effectors are used in conjunction with a generalized conglomerate-of-spheres approach to collision avoidance, so that telerobotic trajectory planning can be accomplished using simple gesture phrases such as 'put that there while avoiding that'. In this concept, an operator (or set of collaborators) need not train for cumbersome telemanipulation of multiple multi-link robots, nor do robots need a priori knowledge of operator intent or exhaustive algorithms for evaluating every aspect of a detailed environment model. The human does what humans do best during task specification, while the robot does what machines do best during trajectory planning and execution. Four telerobotic stages were implemented to demonstrate this strategic supervision concept, which facilitates collaborative control between humans and machines. In the first stage, the operator(s) select virtual reality tools from a 'toolbox', and these virtual tools are computationally interwoven into the live video scene with depth correlation. Each virtual tool is a graphic representation of a robot end-effector (gripper, cutter, or other robot tool) that carries tool-use attributes describing how to perform a task. An operator uses an instrumented glove to virtually retrieve the disembodied tool in the shared scene and place it near objects and obstacles while giving key-point gesture directives, such as 'cut there while avoiding that'. Collaborators on a network may alter the plan by changing tools or tool positioning to achieve preferred results from their own perspectives. When the parties agree, from wherever they reside geographically, the robot(s) create and execute trajectories suited to their own particular links and joints. Stage two generates standard joint-interpolated trajectories and, if necessary, potential field trajectories.
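To make the joint-interpolated trajectories of stage two concrete, the following is a minimal sketch, not taken from the paper: each joint angle is linearly interpolated between a start and goal configuration over a fixed number of steps. The function name, six-joint configuration, and step count are illustrative assumptions.

```python
import numpy as np

def joint_interpolated_trajectory(q_start, q_goal, steps):
    """Linearly interpolate every joint between two configurations.

    Returns an array of shape (steps, n_joints); row 0 is the start
    configuration and the last row is the goal configuration.
    (Illustrative sketch; the paper's actual interpolation details
    may differ.)
    """
    q_start = np.asarray(q_start, dtype=float)
    q_goal = np.asarray(q_goal, dtype=float)
    s = np.linspace(0.0, 1.0, steps)          # normalized path parameter
    return q_start + s[:, None] * (q_goal - q_start)

# Example: a six-joint move (angles in radians, values illustrative)
traj = joint_interpolated_trajectory(
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.6, -0.3, 0.9, 0.0, 0.2, -0.1],
    steps=5)
```

Each intermediate row of `traj` can then be passed to the stage-three collision test before the robot commits to executing the move.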
Stage three tests for collisions with obstacles identified by the operator and modeled as conglomerates of spheres. Stage four performs automatic grasping (or cutting, etc.) once the robot camera acquires a close-up view of the object during approach. In this paper, particular emphasis is placed on the conglomerate-of-spheres approach to collision detection as integrated with the virtual tools concept for a Puma 560 robot by the Virtual Tools and Robotics Group in the Computer Integrated Manufacturing Laboratory at The Pennsylvania State University (Penn State).
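The appeal of modeling bodies as conglomerates of spheres is that the pairwise collision test reduces to a distance comparison: two spheres overlap exactly when the distance between their centers is less than the sum of their radii. The sketch below illustrates that test; it is an assumption-laden illustration of the general technique, not the paper's implementation, and the sphere positions and radii are made up.

```python
import numpy as np

def spheres_collide(conglomerate_a, conglomerate_b):
    """Test two sphere-conglomerate bodies for overlap.

    Each conglomerate is a list of (center, radius) pairs, where
    center is a 3-D point. Two bodies collide if any sphere pair
    satisfies ||c_a - c_b|| < r_a + r_b.
    (Illustrative sketch of the general conglomerate-of-spheres
    idea; the paper's actual data structures may differ.)
    """
    for ca, ra in conglomerate_a:
        for cb, rb in conglomerate_b:
            if np.linalg.norm(np.asarray(ca) - np.asarray(cb)) < ra + rb:
                return True
    return False

# A two-sphere "tool" approaching a one-sphere obstacle (made-up geometry)
tool = [((0.0, 0.0, 0.0), 0.1), ((0.0, 0.0, 0.2), 0.1)]
near_obstacle = [((0.0, 0.0, 0.35), 0.1)]   # 0.15 apart < 0.2 radii sum
far_obstacle = [((1.0, 0.0, 0.0), 0.1)]
spheres_collide(tool, near_obstacle)  # True: the upper tool sphere overlaps
spheres_collide(tool, far_obstacle)   # False: well clear of the obstacle
```

In a trajectory-checking loop, a test like this would be evaluated at each interpolated robot configuration; an exact sphere-sphere check at sampled points keeps the obstacle model coarse while remaining cheap to evaluate.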