Abstract

When a user and a robot share the same physical workspace, the robot may need to maintain an up-to-date 3D representation of the environment. In particular, robot systems often need to reconstruct the parts of the environment where the user executes manipulation tasks. This paper proposes a spatial attention approach for a robot manipulator equipped with an eye-in-hand Kinect range sensor. Salient regions of the environment, where user manipulation actions are most likely to have occurred, are detected by applying a Gaussian Mixture Model clustering algorithm to the user's hand trajectory, which is tracked by a motion capture sensor. The robot's attentional behavior is driven by a next-best view algorithm that computes the most promising range sensor viewpoints from which to observe the detected salient regions, where the environment may have changed. The environment representation is built upon the PCL KinFu Large Scale project [1], an open-source implementation of KinectFusion. KinFu has been modified to execute the next-best view algorithm directly on the GPU and to properly manage voxel data. Experiments are reported that illustrate the proposed attention-based approach and show the effectiveness of GPU-based next-best view planning compared to the same algorithm executed on the CPU.
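
As a concrete illustration of the salient-region detection step, the sketch below (not the authors' implementation) fits a Gaussian Mixture Model to 3D hand-trajectory positions and ranks the resulting components. The synthetic trajectory data, the use of scikit-learn's GaussianMixture, and the compactness-based saliency heuristic are all illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical hand positions (N x 3, meters) as a motion capture
# system might report them: two dwell regions plus transit motion.
rng = np.random.default_rng(0)
trajectory = np.vstack([
    rng.normal([0.30, 0.10, 0.80], 0.03, (200, 3)),  # manipulation spot A
    rng.normal([0.60, 0.40, 0.90], 0.03, (150, 3)),  # manipulation spot B
    rng.normal([0.45, 0.25, 0.85], 0.15, (100, 3)),  # transit between spots
])

# Fit a GMM to the 3D hand positions: components with small spatial
# extent mark places where the hand dwelled, i.e. likely manipulation.
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(trajectory)

# Assumed saliency heuristic: favor well-supported, compact components
# (high mixture weight, small covariance volume).
volumes = np.sqrt(np.linalg.det(gmm.covariances_))
saliency = gmm.weights_ / volumes
for k in np.argsort(-saliency):
    print(f"region {k}: center = {gmm.means_[k].round(2)}, "
          f"saliency = {saliency[k]:.1f}")
```

The compact clusters (spots A and B) come out with the highest saliency, so they would be handed to the next-best view planner as candidate regions to re-observe, while the diffuse transit component is ranked last.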
