Abstract

In this paper, we study the problem of object detection and segmentation in cluttered indoor scenes based on RGB-D data. The main difficulties of object detection and segmentation in indoor scenes stem from severe occlusion, inconspicuous classes, and easily confused categories. To address these problems, we propose a multimodal fusion deep convolutional neural network (MFDCNN) framework for object detection and segmentation, which boosts performance effectively at two levels while keeping the framework end-to-end trainable. For object detection, we adopt a multimodal region proposal network to address object-level detection; for semantic segmentation, we utilize a multimodal fully convolutional network to predict the class label of each pixel. Because we focus on learning object detection and segmentation simultaneously, we propose a novel loss function that couples these two kinds of networks. Under this framework, we target cluttered indoor scenes with challenging settings and evaluate the performance of our MFDCNN on the NYU-Depth V2 dataset. Our MFDCNN achieves state-of-the-art performance on the object detection task and comparable state-of-the-art performance on the semantic segmentation task.
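The abstract describes coupling a detection network and a segmentation network through a single joint loss trained end to end. The paper's exact formulation is not given here, so the following is only a minimal sketch of one common way such a joint objective is built: a weighted sum of a detection classification loss and a per-pixel segmentation cross-entropy, where the weighting factor `lam` and all function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the true class.
    # logits: (..., C) class scores; labels: integer class indices
    # with one index per score vector (e.g. (N,) or (H, W)).
    p = softmax(logits)
    n = labels.size
    return -np.log(p.reshape(n, -1)[np.arange(n), labels.ravel()] + 1e-12).mean()

def joint_loss(det_logits, det_labels, seg_logits, seg_labels, lam=1.0):
    # Hypothetical joint objective: L = L_det + lam * L_seg.
    # det_logits: (N, C) region-level class scores from the detection branch.
    # seg_logits: (H, W, C) per-pixel class scores from the segmentation branch.
    return cross_entropy(det_logits, det_labels) + lam * cross_entropy(seg_logits, seg_labels)

# Toy example: two region proposals with 2 classes, a 4x4 image with 3 classes.
det_logits = np.array([[2.0, 0.1], [0.2, 1.5]])
det_labels = np.array([0, 1])
seg_logits = np.zeros((4, 4, 3))   # uniform scores -> per-pixel loss = log(3)
seg_labels = np.zeros((4, 4), dtype=int)
loss = joint_loss(det_logits, det_labels, seg_logits, seg_labels, lam=1.0)
```

Because both terms share the backbone features in an architecture like the one described, minimizing this single scalar lets gradients from both tasks update the shared layers jointly, which is what makes end-to-end training of the combined framework possible.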
