Abstract

Many robotic processes require the system to maintain a tool’s orientation and distance from a surface. To do so, researchers often use virtual fixtures (VFs) to either guide the robot along a path or forbid it from leaving the workspace. Previous efforts relied on volumetric primitives (planes, cylinders, etc.) or raw sensor data to define VFs. However, those approaches only work for a small subset of real-world objects. Extending this approach is complicated not only by VF generation but also by generalizing user traversal of the VF to command a robot trajectory remotely. In this study, we present the concept of task VFs, which convert layers of a point cloud-based Guidance VF into a bidirectional graph structure and pair them with a Forbidden Region VF. These VFs are hardware-agnostic and can be generated from virtually any source data, including parametric objects (superellipsoids, supertoroids, etc.), meshes (including those from computer-aided design (CAD)), and real-time sensor data for open-world scenarios. We address surface convexity and concavity, since these properties, together with distance to the task surface, determine the size and resolution of VF layers. This article then presents the manipulator-to-task transform tool for task VF visualization and for limiting human–robot interaction ambiguities. Testing confirmed generation success, and users performed spatially discrete experiments to evaluate task VF usability on complex geometries, which demonstrated their interpretability. The manipulator-to-task transform tool applies to many robotic applications, including collision avoidance, process design, training, and task definition, for virtually any geometry.
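The abstract describes converting offset layers of a point cloud-based Guidance VF into a bidirectional graph paired with a Forbidden Region VF. As an illustration only, the Python sketch below shows one plausible way such a layered graph could be assembled; the function name, neighbour counts, and data layout are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def build_task_vf_graph(layers, k_intra=4):
    """
    Hypothetical sketch: connect layered guidance-VF points into a
    bidirectional graph.

    layers : list of (N_i, 3) arrays, each an offset layer of points
             sampled around the task surface (closest layer first).
    Returns an adjacency dict {node_id: set(node_id)}, where node ids
    are (layer_index, point_index) tuples.
    """
    adjacency = {}

    def add_edge(a, b):
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)  # edges are bidirectional

    # Intra-layer edges: link each point to its k nearest neighbours in
    # the same layer, so the tool can traverse along the surface.
    for li, pts in enumerate(layers):
        for pi, p in enumerate(pts):
            d = np.linalg.norm(pts - p, axis=1)
            for ni in np.argsort(d)[1:k_intra + 1]:
                add_edge((li, pi), (li, int(ni)))

    # Inter-layer edges: link each point to the closest point in the
    # next layer, so the tool can move toward or away from the surface.
    for li in range(len(layers) - 1):
        nxt = layers[li + 1]
        for pi, p in enumerate(layers[li]):
            ni = int(np.argmin(np.linalg.norm(nxt - p, axis=1)))
            add_edge((li, pi), (li + 1, ni))

    return adjacency
```

In this reading, graph traversal would correspond to commanding the remote tool between admissible poses, while the Forbidden Region VF would prune or block edges that enter the protected volume.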
