Abstract

In recent years, the growth of the computational power available in the Central Processing Units (CPUs) of consumer computers has tapered significantly. At the same time, growth in the computational power available in Graphics Processing Units (GPUs) has remained strong. Algorithms implemented on GPUs today are no longer limited to graphics processing; they include scientific computation and beyond. This paper is concerned with massively parallel implementations of incremental sampling-based robot motion planning algorithms, namely the widely-used Rapidly-exploring Random Tree (RRT) algorithm and its asymptotically-optimal counterpart, RRT*. We demonstrate an implementation of the RRT and RRT* motion-planning algorithms for a high-dimensional robotic manipulator that takes advantage of an NVIDIA CUDA-enabled GPU. We focus on parallelizing the collision-checking procedure, which is generally recognized as the most computationally expensive component of sampling-based motion planning algorithms. Our experimental results indicate significant speedup when compared to CPU implementations, leading to practical algorithms for optimal motion planning in high-dimensional configuration spaces.
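To make the parallelization idea concrete, the following is a minimal sketch of GPU-parallel collision checking for a single candidate RRT/RRT* edge, written in CUDA. It is not the paper's implementation: the obstacle model (spheres), the configuration-space dimension, and the forwardKinematics helper are all assumptions introduced here for illustration. Each thread tests one interpolated configuration along the edge, and any colliding sample marks the whole edge as invalid.

    // Hypothetical sketch: one GPU thread per interpolated configuration on an edge.
    // Assumptions (not from the paper): sphere obstacles, a 6-DOF configuration space,
    // and a placeholder forward-kinematics routine that maps a configuration to a
    // single workspace point. A real checker would test every link of the manipulator.

    #include <cuda_runtime.h>

    #define DOF 6          // configuration-space dimension (assumed)
    #define NUM_OBS 64     // number of sphere obstacles (assumed)

    struct Sphere { float x, y, z, r; };

    __constant__ Sphere d_obstacles[NUM_OBS];

    // Placeholder forward kinematics for illustration only.
    __device__ void forwardKinematics(const float q[DOF], float p[3]) {
        p[0] = q[0]; p[1] = q[1]; p[2] = q[2];   // toy mapping, not a real kinematic chain
    }

    // Checks the straight-line edge from qFrom to qTo at numSamples >= 2 points.
    __global__ void edgeCollisionKernel(const float *qFrom, const float *qTo,
                                        int numSamples, int *collisionFlag) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numSamples) return;

        // Interpolate the i-th configuration along the edge.
        float t = (float)i / (float)(numSamples - 1);
        float q[DOF], p[3];
        for (int d = 0; d < DOF; ++d)
            q[d] = (1.0f - t) * qFrom[d] + t * qTo[d];

        forwardKinematics(q, p);

        // Test the resulting workspace point against every sphere obstacle.
        for (int k = 0; k < NUM_OBS; ++k) {
            float dx = p[0] - d_obstacles[k].x;
            float dy = p[1] - d_obstacles[k].y;
            float dz = p[2] - d_obstacles[k].z;
            float r  = d_obstacles[k].r;
            if (dx * dx + dy * dy + dz * dz <= r * r) {
                atomicExch(collisionFlag, 1);   // any hit invalidates the edge
                return;
            }
        }
    }

On the host side, the planner would copy the edge endpoints and a zeroed flag to the device, launch the kernel with enough threads to cover all interpolated samples, and read the flag back to decide whether the edge can be added to the tree; batching many edges per launch is the natural next step for RRT*, where rewiring requires many such checks per iteration.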
