Abstract

A robot visual odometry method based on instance-segmentation feature points is proposed for indoor environments. Unlike conventional visual odometry feature-point extraction, we first use the convolutional neural network YOLACT to segment images at the pixel level, obtaining the masks and semantic names of key objects. ORB feature points are then extracted within these object masks, and feature points in adjacent frames are matched by bidirectional nearest-neighbor search fused with the semantic names. Finally, a minimum-reprojection-error equation is established to estimate the camera pose and spatial-point parameters. Experiments in several indoor environments, compared against a conventional visual odometry method, show that the algorithm achieves higher positioning accuracy and a lower growth rate of positioning error with distance, making it suitable for robot positioning and navigation in indoor scenes.
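
The sketch below illustrates the masked feature-extraction and semantic matching steps described above, using OpenCV's ORB detector and brute-force matcher as stand-ins; this is a minimal sketch, not the authors' implementation. The YOLACT inference call is omitted, and the masks/names inputs are hypothetical placeholders for its per-object binary masks and class names.

import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
# crossCheck=True keeps only mutual (bidirectional) nearest-neighbor matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def masked_features(gray, masks, names):
    # Extract ORB features only inside each instance mask, tagging every
    # keypoint with the semantic name of the object it lies on.
    kps, descs, labels = [], [], []
    for mask, name in zip(masks, names):
        kp, des = orb.detectAndCompute(gray, mask.astype(np.uint8))
        if des is None:
            continue
        kps += list(kp)
        descs.append(des)
        labels += [name] * len(kp)
    return kps, (np.vstack(descs) if descs else None), labels

def match_frames(des1, labels1, des2, labels2):
    # Bidirectional nearest-neighbor matching, then reject any match whose
    # two endpoints carry different semantic names (the semantic fusion step).
    matches = matcher.match(des1, des2)
    return [m for m in matches
            if labels1[m.queryIdx] == labels2[m.trainIdx]]

Restricting detection to the masks keeps feature points on stable, recognizable objects, and the semantic-name check prunes cross-object mismatches before the surviving correspondences feed the minimum-reprojection-error pose optimization.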
