Abstract

We tackle the problem of robot localization by means of a deep learning (DL) approach. Convolutional neural networks, originally devised for image analysis, are at the core of the proposed solution. An optical encoder neural network (OE-net) is designed to estimate the relative pose of a robot by processing consecutive images. A monocular camera, fastened to the robot and oriented toward the floor, collects the vision data. The OE-net takes a pair of consecutive acquired images as input and provides the relative pose information. The network is trained using a supervised learning approach. This preliminary study, carried out on synthetic images, suggests that a convolutional network, and hence a DL approach, can be a viable complement to traditional visual odometry for robot ego-motion estimation. The obtained results are very promising.
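The abstract does not specify the network architecture, so the following PyTorch sketch only illustrates the general idea it describes: a small CNN that stacks two consecutive frames channel-wise and regresses a planar relative pose (dx, dy, dθ), trained in a supervised fashion against ground-truth poses. The layer sizes, the pose parameterization, and the loss are assumptions for illustration, not the OE-net of the paper.

```python
import torch
import torch.nn as nn

class OENetSketch(nn.Module):
    """Illustrative CNN mapping a pair of consecutive floor images
    (stacked along the channel axis) to a relative pose (dx, dy, dtheta).
    All layer sizes are assumptions, not the architecture of the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=5, stride=2, padding=2),  # two RGB frames -> 6 input channels
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                               # global pooling to a 128-d descriptor
        )
        self.regressor = nn.Linear(128, 3)                         # relative pose: (dx, dy, dtheta)

    def forward(self, img_t, img_t1):
        x = torch.cat([img_t, img_t1], dim=1)                      # stack consecutive frames channel-wise
        x = self.features(x).flatten(1)
        return self.regressor(x)

# Supervised training step with a mean-squared-error loss on the relative pose.
model = OENetSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
img_t  = torch.randn(8, 3, 64, 64)   # batch of frames at time t (synthetic placeholder data)
img_t1 = torch.randn(8, 3, 64, 64)   # frames at time t+1
gt_pose = torch.randn(8, 3)          # ground-truth (dx, dy, dtheta) labels
loss = nn.functional.mse_loss(model(img_t, img_t1), gt_pose)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The channel-wise stacking of the two frames is one common way to feed an image pair to a single CNN; a siamese encoder with shared weights would be an equally plausible design for this kind of relative-pose regression.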
