Abstract

Indoor scene recognition is challenging because different indoor spaces share many common features. In robotics applications, the uncertainty increases further due to illumination changes, motion blur, interference from external light sources, and cluttered environments. Most existing fusion approaches do not account for this uncertainty, while others incur a high computational cost that is unsuitable for robots with limited resources. To mitigate these issues, this paper proposes a reliable indoor scene recognition approach for resource-constrained mobile robots based on robust deep convolutional neural network (CNN) feature extractors and neuro-fuzzy inference that accounts for data uncertainty. All CNN feature extractors are pre-trained on the ImageNet dataset and used in a transfer-learning manner. The performance of our fusion method has been assessed on a customized MIT-67 dataset and for real-time processing on a Locobot robot. We also compare the proposed method with two standard fusion methods: Early Feature Fusion (EFF) and Weighted Average Late Fusion (WALF). The experimental results demonstrate that our method achieves competitive results with a precision of 94%, and it performs well on the Locobot robot at a speed of 3.1 frames per second.
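
The abstract describes using ImageNet-pretrained CNNs as frozen feature extractors in a transfer-learning setup. The following is a minimal sketch of that idea, not the authors' implementation; the framework (PyTorch/torchvision) and the backbone choice (ResNet-18) are assumptions made purely for illustration.

```python
# Minimal sketch: an ImageNet-pretrained CNN used as a frozen feature extractor.
# Framework and backbone are assumptions; the paper does not specify them here.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained backbone and drop its classification head.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()   # keep the 512-d pooled features
backbone.eval()                     # frozen: inference only, no fine-tuning

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path: str) -> torch.Tensor:
    """Return a 512-d feature vector for one indoor-scene image."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)        # add batch dimension
    with torch.no_grad():
        return backbone(x).squeeze(0)
```

Feature vectors extracted this way from several backbones could then be combined by a fusion stage (e.g., the neuro-fuzzy inference, EFF, or WALF methods compared in the paper).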
