Abstract

Multi-view object recognition is challenged by the effect of varying view angles on intra-class relationships. Visually impaired individuals can benefit from accurate navigation services that help them avoid obstacles on the way to their destination. This paper proposes RSIGConv, an indoor object detection framework for visually impaired people based on an integrated Region proposal and Spatial Information Guided Convolution network. To obtain mutual complementarity (MC) features, the RGB and HHA feature maps are fused by an Information Translation Module (ITM). The hyper-parameters are tuned with a Bayesian Optimization Algorithm (BOA) to reduce the training error and to narrow the gap between training and validation error. The proposed framework is evaluated on the publicly available SUN RGB-D dataset and compared with previous prediction models. The simulation results show that the model outperforms existing approaches, achieving an accuracy of 97.77%.
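The abstract states that hyper-parameters are tuned with a Bayesian Optimization Algorithm (BOA). As a rough, generic illustration only (not the paper's implementation), a minimal 1-D Bayesian optimization loop — a Gaussian-process surrogate with an expected-improvement acquisition, run over a hypothetical validation-error curve `val_error` standing in for a real training run — might look like:

```python
import math
import numpy as np

def rbf_kernel(a, b, length=0.2):
    """Squared-exponential kernel between 1-D point sets a and b."""
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, noise=1e-4):
    """Gaussian-process posterior mean and variance at x_query."""
    K = rbf_kernel(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_query)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)  # k(x, x) = 1 for this kernel
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    """Expected-improvement acquisition for minimization."""
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.array([math.erf(t / math.sqrt(2)) for t in z]))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2 * math.pi)
    return (best - mu) * cdf + sigma * pdf

def val_error(lr):
    """Hypothetical validation-error curve; a stand-in for an actual training run."""
    return (lr - 0.3) ** 2

# A few initial observations of the hyper-parameter (e.g. a learning rate in [0, 1]).
x_obs = np.array([0.1, 0.5, 0.9])
y_obs = val_error(x_obs)
grid = np.linspace(0.0, 1.0, 201)

# BO loop: refit the surrogate, then evaluate where expected improvement peaks.
for _ in range(10):
    mu, var = gp_posterior(x_obs, y_obs, grid)
    x_next = grid[np.argmax(expected_improvement(mu, var, y_obs.min()))]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, val_error(x_next))

best_x = x_obs[np.argmin(y_obs)]
```

Because each objective evaluation corresponds to a full training run, a surrogate-guided search like this typically needs far fewer evaluations than grid or random search, which is why Bayesian optimization is a common choice for tuning deep detection networks.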
