Abstract

This paper proposes a network called the points-guided sampling net (PGSN) to guide the sampling process of sampling-based motion planners by utilizing the geometric information of obstacles. The geometric information is extracted from the point cloud of the obstacles. By analyzing the properties of the point cloud, we propose a VAE feature extraction net that incorporates the variational autoencoder (VAE) framework with architectures designed specifically for point clouds. Furthermore, we design a multi-modal sampling net to model the probability distribution of states based on training trajectories collected from different environments. Based on PGSN, we propose a sampling-based motion planning algorithm called the point-guided rapidly-exploring random tree (PG-RRT). Three experiments are conducted to verify the proposed PGSN: Exp I shows that the proposed VAE feature extraction net successfully extracts geometric features from the input point cloud; Exp II verifies that the multi-modal sampling net chooses the corresponding mode with respect to the extracted features; Exp III demonstrates the efficacy of our PG-RRT algorithm by showing that PG-RRT outperforms other algorithms. Moreover, we provide theoretical analysis and insights toward understanding our model.

Note to Practitioners: Obstacles render much of the sampling space invalid, so traditional sampling-based motion planning (SBMP) algorithms are often unable to generate a trajectory within a reasonably short period of time. To improve the success rate and efficiency of SBMP, this paper proposes a novel deep neural network called the points-guided sampling net (PGSN). PGSN is designed to exploit (1) environmental point clouds and (2) training trajectories from multiple environments with different obstacles. In the first step, the point clouds contain important geometric information. To utilize this information, we adopt a variational autoencoder approach that combines an encoder and a decoder to extract geometric features from point clouds more accurately. In the second step, trajectories from multiple environments exhibit a multi-modal property that can be represented by a truncated multivariate Gaussian mixture model. We propose a multi-modal sampling net to learn the optimal parameters of this model from the training trajectories and to select the corresponding mode based on the extracted features. Experiments demonstrate that the proposed algorithm is feasible and achieves a higher success rate than state-of-the-art methods. Our method uses a single frame of point cloud to improve efficiency; therefore, multiple point clouds from different perspectives may be needed when objects occlude one another.
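To give a concrete picture of how a learned multi-modal sampler can bias an RRT-style planner, the following is a minimal Python/NumPy sketch. It is not the authors' implementation: the mixture weights, means, and covariances stand in for the output of the multi-modal sampling net, the truncation is approximated by rejection against the state bounds, and the lam mixing ratio with uniform sampling is an illustrative assumption rather than a parameter from the paper.

```python
import numpy as np

def sample_truncated_gmm(weights, means, covs, bounds, rng, max_tries=100):
    """Draw one state from a Gaussian mixture, rejecting samples outside the
    state-space bounds (a simple stand-in for a truncated multivariate GMM)."""
    lo, hi = bounds
    for _ in range(max_tries):
        k = rng.choice(len(weights), p=weights)            # pick a mode
        x = rng.multivariate_normal(means[k], covs[k])     # sample that mode
        if np.all(x >= lo) and np.all(x <= hi):
            return x
    return rng.uniform(lo, hi)                             # fallback: uniform

def sample_state(gmm, bounds, rng, lam=0.5):
    """Hybrid sampler: with probability lam use the learned mixture (which
    would be conditioned on the point-cloud features), otherwise sample
    uniformly so the planner retains its exploration behavior."""
    lo, hi = bounds
    if rng.random() < lam:
        return sample_truncated_gmm(*gmm, bounds, rng)
    return rng.uniform(lo, hi)

# Illustrative 2-D example with hypothetical mixture parameters.
rng = np.random.default_rng(0)
gmm = (np.array([0.6, 0.4]),                               # mixture weights
       [np.array([1.0, 1.0]), np.array([4.0, 4.0])],       # mode means
       [0.1 * np.eye(2), 0.2 * np.eye(2)])                 # mode covariances
bounds = (np.zeros(2), 5.0 * np.ones(2))
samples = np.array([sample_state(gmm, bounds, rng) for _ in range(10)])
print(samples)
```

In an actual PG-RRT-style planner, each state drawn this way would be fed to the tree-extension step in place of the usual uniform sample, so the tree grows preferentially through regions the learned model considers promising while still covering the rest of the space.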

