Abstract

We tackle the problem of robot localization with a deep learning (DL) approach. Convolutional neural networks (CNNs), originally devised for image analysis, are at the core of the proposed solution. An optical encoder neural network (OE-net) is devised to estimate the relative pose of a robot by processing consecutive images. A monocular camera, mounted on the robot and pointed at the floor, collects the vision data. The OE-net takes a pair of consecutive acquired images as input and outputs the relative pose. The network is trained with supervised learning. This preliminary study, conducted on synthetic images, suggests that a convolutional network, and hence a DL approach, can be a viable complement to traditional visual odometry for robot ego-motion estimation. The obtained results are very promising.
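To illustrate the general idea, here is a minimal sketch of a network that consumes a pair of consecutive floor images and regresses a planar relative pose (dx, dy, dtheta). This is not the authors' OE-net architecture; the layer sizes, the single convolutional layer, and the channel-stacking of the image pair are all assumptions made for illustration, using plain NumPy so the example is self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w, stride=2):
    """Naive valid cross-correlation. x: (C,H,W), w: (F,C,k,k) -> (F,H',W')."""
    f, c, k, _ = w.shape
    _, h, wd = x.shape
    oh = (h - k) // stride + 1
    ow = (wd - k) // stride + 1
    out = np.zeros((f, oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = x[:, i * stride:i * stride + k, j * stride:j * stride + k]
            # Dot every filter with the current image patch.
            out[:, i, j] = np.tensordot(w, patch, axes=([1, 2, 3], [0, 1, 2]))
    return out

def forward(img_t, img_t1, params):
    """Forward pass: stacked image pair -> 3-vector relative pose estimate."""
    x = np.stack([img_t, img_t1])                 # (2, H, W): pair as 2 channels
    h = np.maximum(conv2d(x, params["w1"]), 0.0)  # conv + ReLU
    feat = h.mean(axis=(1, 2))                    # global average pooling
    return params["w2"] @ feat + params["b2"]     # linear head: (dx, dy, dtheta)

# Randomly initialized (untrained) parameters, for shape illustration only.
params = {
    "w1": 0.1 * rng.standard_normal((8, 2, 5, 5)),
    "w2": 0.1 * rng.standard_normal((3, 8)),
    "b2": np.zeros(3),
}

img_t = rng.random((64, 64))   # synthetic floor image at time t
img_t1 = rng.random((64, 64))  # synthetic floor image at time t+1
pose = forward(img_t, img_t1, params)
print(pose.shape)  # (3,)
```

In a supervised setting such as the one described above, the parameters would be fitted by minimizing a regression loss (e.g. mean squared error) between the predicted pose and the ground-truth relative pose of each image pair.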
