Abstract
Vision-based mapping is an emerging technology built on decades of research advancements. The best-known mapping method is simultaneous localization and mapping (SLAM), which produces an accurate map projected in a simulation. However, SLAM requires an active sensor to acquire data from the environment, unlike vision-based mapping, which collects data with a passive sensor. This project aims to develop an autonomous mapping and exploration algorithm, design a controller for the tracked robot vehicle, and analyze the accuracy of the algorithm. The main challenges in autonomous mapping are precision, limited computational power, and computational complexity. The algorithm is therefore based on visual odometry using a single visual sensor. The tracked robot has also been designed and implemented on a Raspberry Pi 3. The accuracy for two objects of different heights was measured to validate that the algorithm can project real objects in a 3D projection. The results are presented in figures to demonstrate the algorithm's capability to project the map in 3D. The algorithm works as expected but still requires improvements to increase the precision of the map projection.
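The abstract does not detail how the monocular visual odometry is implemented; the sketch below is only an illustrative assumption of a typical pipeline (feature tracking, essential-matrix estimation, pose recovery) using OpenCV, with placeholder camera intrinsics (FOCAL, PP) and a generic video source, not the authors' actual code.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics; the paper does not publish its calibration,
# so these values are placeholders for illustration only.
FOCAL = 718.0
PP = (320.0, 240.0)

def estimate_motion(prev_gray, curr_gray):
    """Estimate relative camera rotation and (unit-scale) translation between two frames."""
    # Detect corners in the previous frame and track them into the current one.
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   prev_pts, None)
    # Keep only points that were tracked successfully.
    good_prev = prev_pts[status.ravel() == 1]
    good_curr = curr_pts[status.ravel() == 1]
    # Recover camera motion (up to scale) from the essential matrix.
    E, _ = cv2.findEssentialMat(good_curr, good_prev, focal=FOCAL, pp=PP,
                                method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_curr, good_prev, focal=FOCAL, pp=PP)
    return R, t

# Usage sketch: accumulate pose over a video stream (e.g. a Raspberry Pi camera).
cap = cv2.VideoCapture(0)
pose_R, pose_t = np.eye(3), np.zeros((3, 1))
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    R, t = estimate_motion(prev_gray, curr_gray)
    # Monocular VO recovers translation only up to an unknown scale factor.
    pose_t = pose_t + pose_R @ t
    pose_R = R @ pose_R
    prev_gray = curr_gray
```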