Simultaneous localization and mapping (SLAM) enables robots to localize themselves in unfamiliar environments while concurrently building maps of their surroundings, and it is used in various fields, including robotics and mapping. The most common SLAM methods are LiDAR-SLAM, which uses a Light Detection and Ranging (LiDAR) sensor, and Visual SLAM (VSLAM), which uses a camera. VSLAM is receiving considerable research interest thanks to its advantages over LiDAR, such as lower cost, lower energy consumption, durability, and rich environmental information. This study aims to produce a three-dimensional (3D) model of an indoor area using image data captured by a stereo camera mounted on an Unmanned Ground Vehicle (UGV). Easily measured objects in the field of operation were chosen to assess the accuracy of the generated model. The actual dimensions of the objects were measured and compared with those derived from the VSLAM-based 3D model. The evaluation showed that object dimensions obtained from the model deviated from the measured values by up to ±2 cm. The surface accuracy of the 3D model was also analyzed: areas with flat wall and floor surfaces were selected, and the plane accuracy of these areas was assessed. The plane accuracy values of the specified surfaces were found to be below ±1 cm.
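The plane-accuracy analysis mentioned above can be sketched as follows: fit a least-squares plane to points sampled from a nominally flat surface and report the point-to-plane residuals. This is a minimal illustration, not the study's actual pipeline; the synthetic point cloud and its noise level are assumptions for demonstration only.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).

    The normal is the right singular vector associated with the smallest
    singular value of the centered point cloud.
    """
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

def plane_residuals(points):
    """Signed point-to-plane distances for the fitted plane (input units)."""
    centroid, normal = fit_plane(points)
    return (points - centroid) @ normal

# Synthetic wall patch (hypothetical data): a flat plane z = 0 with
# ~2 mm Gaussian noise, coordinates in metres.
rng = np.random.default_rng(0)
xy = rng.uniform(0.0, 2.0, size=(500, 2))
z = 0.002 * rng.standard_normal(500)
pts = np.column_stack([xy, z])

res = plane_residuals(pts)
print(f"RMS deviation:    {np.sqrt(np.mean(res**2)) * 100:.2f} cm")
print(f"Max |deviation|:  {np.abs(res).max() * 100:.2f} cm")
```

A model surface would pass the abstract's ±1 cm criterion if all residuals of the fitted plane stay within that band.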