Abstract
We introduce a semantic segmentation approach to detect various objects for the mobile robot system “ROSWITHA” (RObot System WITH Autonomy). Semantic segmentation is a challenging research field in machine learning and computer vision, and it is more robust than other traditional state-of-the-art methods for understanding a robot's surroundings. It provides the richest information about an object, namely classification and localization at both the image level and the pixel level, and thus precisely depicts the shape and position of the object in space. In this work, we verified the effectiveness of semantic segmentation as an aid to robust indoor navigation. To make the output segmentation map meaningful and to enhance model accuracy, point cloud data were extracted from the depth camera, fusing the RGB and depth streams to improve speed and accuracy compared with other machine learning algorithms. We compared our modified approach with state-of-the-art methods when trained on the publicly available NYUv2 dataset. Moreover, the model was then trained on the customized indoor dataset 1 (three classes) and dataset 2 (seven classes) to achieve robust classification of objects in the dynamic environment of the Frankfurt University of Applied Sciences laboratories. The model attains a global accuracy of 98.2% with a mean intersection over union (mIoU) of 90.9% on dataset 1, and a global accuracy of 95.6% with an mIoU of 72% on dataset 2. Furthermore, the evaluations were performed in our indoor scenario.
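To illustrate the RGB-depth fusion step described above, the following is a minimal sketch (not the paper's implementation) of back-projecting a registered depth frame into a 3-D point cloud and attaching the per-pixel colors and semantic labels. The function name, camera intrinsics (fx, fy, cx, cy), and image dimensions are hypothetical placeholders; the geometry is the standard pinhole back-projection.

```python
# Hedged sketch: fuse an RGB frame, a registered depth frame, and a
# per-pixel segmentation map into a labeled 3-D point cloud.
import numpy as np

def depth_to_labeled_cloud(depth, rgb, labels, fx, fy, cx, cy):
    """Back-project per-pixel depth into camera-frame 3-D points and
    attach each pixel's RGB color and semantic class label.

    depth  : (H, W) float array, metric depth in meters (0 = invalid)
    rgb    : (H, W, 3) uint8 array, color image registered to depth
    labels : (H, W) int array, class IDs from the segmentation model
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0                               # drop invalid readings

    z = depth[valid]
    x = (u[valid] - cx) * z / fx                    # pinhole back-projection
    y = (v[valid] - cy) * z / fy

    points = np.stack([x, y, z], axis=1)            # (N, 3) camera-frame points
    colors = rgb[valid]                             # (N, 3) per-point color
    classes = labels[valid]                         # (N,)  per-point class ID
    return points, colors, classes

# Usage with synthetic data and assumed intrinsics:
H, W = 480, 640
depth = np.full((H, W), 2.0)                        # flat surface 2 m away
rgb = np.zeros((H, W, 3), dtype=np.uint8)
labels = np.zeros((H, W), dtype=np.int32)           # single class everywhere
pts, cols, cls = depth_to_labeled_cloud(depth, rgb, labels,
                                        fx=525.0, fy=525.0,
                                        cx=319.5, cy=239.5)
print(pts.shape, cols.shape, cls.shape)             # (307200, 3) (307200, 3) (307200,)
```

Mapping the segmentation labels onto the point cloud in this way gives each 3-D point a semantic class, which is what allows the 2-D segmentation output to inform navigation in 3-D space.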