Abstract

Vision-based measurement and prediction (VMP) is an important and challenging task for autonomous robotic manipulation, especially in dynamic and uncertain scenarios. However, due to the limitations of visual measurement in such environments, such as occlusion, lighting variation, and hardware constraints, it is difficult to acquire accurate object positions as observations. Moreover, manipulating a dynamic object with unknown or uncertain motion rules usually requires an accurate prediction of its motion trajectory at the desired moment, which dramatically increases the difficulty. To address this problem, we propose a time granularity-based vision prediction framework whose core is an integrated prediction model built on multiple long short-term memory (LSTM) neural networks. First, we use a vision sensor to acquire raw measurements and apply preprocessing (e.g., data completion, error compensation, and filtering) to turn the raw measurements into standard trajectory data. Then, we devise a novel integration strategy based on time granularity boost (TG-Boost) to select appropriate base predictors and use the historical trajectory data to construct a high-precision prediction model. Finally, we validate the proposed methodology through simulation and a series of dynamic manipulation experiments. The results show that our method outperforms state-of-the-art prediction algorithms in terms of prediction accuracy, success rate, and robustness.
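
To make the described pipeline concrete, the sketch below illustrates the two stages named in the abstract: preprocessing noisy, partially occluded vision measurements into standard trajectory data, and predicting the next position with LSTM base predictors combined into an ensemble. This is a minimal illustration under assumptions, not the paper's implementation: the names (`preprocess`, `TrajectoryLSTM`), the window size, the interpolation/moving-average choices, and the uniform ensemble weights are all hypothetical stand-ins; the actual TG-Boost strategy for selecting and weighting base predictors is not specified in the abstract.

```python
import numpy as np
import torch
import torch.nn as nn

def preprocess(raw: np.ndarray) -> np.ndarray:
    """Turn raw vision measurements into standard trajectory data.

    raw: (T, 3) array of measured object positions; NaN marks frames
    lost to occlusion or sensor failure.
    """
    out = raw.copy()
    t = np.arange(len(out))
    for d in range(out.shape[1]):
        mask = np.isnan(out[:, d])
        # Data completion: linearly interpolate the missing samples.
        out[mask, d] = np.interp(t[mask], t[~mask], out[~mask, d])
        # Filtering: a simple moving average to suppress measurement noise
        # (a placeholder for whatever filter the paper actually uses).
        kernel = np.ones(5) / 5.0
        out[:, d] = np.convolve(out[:, d], kernel, mode="same")
    return out

class TrajectoryLSTM(nn.Module):
    """One base predictor: maps a window of past positions to the next one."""

    def __init__(self, dim: int = 3, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, dim)

    def forward(self, x):            # x: (batch, window, dim)
        h, _ = self.lstm(x)
        return self.head(h[:, -1])   # predicted next position, (batch, dim)

# Base predictors would be trained on different time granularities
# (e.g., every sample vs. every 2nd sample); a TG-Boost-style ensemble
# would weight them by validation error. Uniform averaging is used here
# only as a placeholder.
predictors = [TrajectoryLSTM() for _ in range(2)]
window = torch.randn(1, 10, 3)  # stand-in for a preprocessed history window
prediction = torch.stack([m(window) for m in predictors]).mean(dim=0)
```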
