Abstract

Recent advances in computer vision have had a clear impact on multi-object recognition and scene understanding. Scene understanding is a demanding component of many applications, including augmented reality-based scene integration, robotic navigation, autonomous driving, and tourist guidance. By grouping visual information into contextually unified segments, convolutional neural network (CNN)-based approaches can significantly reduce the clutter that is common in classical scene-understanding frameworks. In this paper, we propose a CNN-based segmentation method for recognizing multiple objects in an image. After acquisition and preprocessing, the image is first segmented with a CNN. CNN features are then extracted from the segmented objects, and discrete cosine transform (DCT) and discrete wavelet transform (DWT) features are computed. The CNN features and these handcrafted features are fused, and a genetic algorithm is used to select a minimal feature subset. A neuro-fuzzy approach is then applied to recognize the objects in the scene, and the relationships between the recognized objects are examined using an object-to-object relation approach. Finally, a decision tree assigns the relevant scene labels based on the recognized objects. Experimental results on complex scene datasets, including SUN RGB-D (Red Green Blue-Depth) and Cityscapes, demonstrate strong performance.
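For concreteness, the sketch below illustrates how the handcrafted DCT/DWT features, feature-level fusion, and genetic-algorithm feature selection mentioned above could be assembled with standard Python libraries (NumPy, SciPy, PyWavelets). It is a minimal illustration under our own assumptions, not the paper's implementation: the CNN feature vector is a placeholder, fusion is assumed to be simple concatenation, and `fitness` stands in for whatever mask-scoring criterion (e.g., validation accuracy of the downstream classifier) is chosen.

```python
# Minimal sketch (not the authors' code): handcrafted features, concatenation
# fusion, and a toy genetic-algorithm feature selector. The CNN feature vector
# and the fitness function are hypothetical placeholders.
import numpy as np
import pywt                      # PyWavelets, for the 2-D DWT
from scipy.fftpack import dct    # SciPy, for the 2-D DCT


def dct_features(patch, k=64):
    """2-D DCT of a grayscale patch; keep the first k coefficients as a descriptor."""
    coeffs = dct(dct(patch, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs.flatten()[:k]


def dwt_features(patch, wavelet='haar'):
    """Single-level 2-D DWT; summarize each sub-band by its energy."""
    cA, (cH, cV, cD) = pywt.dwt2(patch, wavelet)
    return np.array([np.sum(band ** 2) for band in (cA, cH, cV, cD)])


def fuse(cnn_vector, patch):
    """Feature-level fusion by simple concatenation (an assumption, not the paper's scheme)."""
    return np.concatenate([cnn_vector, dct_features(patch), dwt_features(patch)])


def ga_select(n_features, fitness, pop=20, gens=30, seed=0):
    """Toy genetic algorithm over binary feature masks.

    `fitness(mask)` should score a boolean mask, e.g. by the validation
    accuracy of a classifier trained on the selected feature columns.
    """
    rng = np.random.default_rng(seed)
    masks = rng.integers(0, 2, size=(pop, n_features))
    for _ in range(gens):
        scores = np.array([fitness(m.astype(bool)) for m in masks])
        parents = masks[np.argsort(scores)[-pop // 2:]]            # keep the fittest half
        children = parents[rng.integers(0, len(parents), pop - len(parents))].copy()
        flip = rng.random(children.shape) < 0.05                   # bit-flip mutation
        children[flip] ^= 1
        masks = np.vstack([parents, children])
    best = max(masks, key=lambda m: fitness(m.astype(bool)))
    return best.astype(bool)
```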
