Abstract

We present an approach to multimodal semantic segmentation based on both color and depth information. Our goal is to build a semantic map containing high-level information, namely objects and background categories (carpet, parquet, walls, etc.). This approach was developed for the Panoramic and Active Camera for Object Mapping (PACOM) project (supported by DGA in the frame of the "CAROTTE" competition and funded by ANR under grant 2009 CORD 102; CAROTTE is organized by the French research funding agency (ANR) and the French armament procurement agency (DGA); website: <http://www.defi-carotte.fr>) in order to participate in a French exploration and mapping contest called CAROTTE. Our method is based on a structured output prediction strategy to detect the various elements of the environment, using both color and depth images from the Kinect camera. The image is first over-segmented into small homogeneous regions named "superpixels," which are then classified and characterized using a bag-of-features representation. For each superpixel, texture and color descriptors are computed from the color image, and 3D descriptors are computed from the associated depth image. A Markov Random Field (MRF) model then fuses texture, color, depth, and neighboring information to associate a label with each superpixel extracted from the image. We present an evaluation of different segmentation algorithms for the semantic labeling task and of the benefit of integrating depth information into the superpixel computation.
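The MRF step described above combines per-superpixel classifier scores with a smoothness term over neighboring superpixels. The paper's exact energy function and inference method are not given in the abstract, so the sketch below is only illustrative: it assumes unary costs (stand-ins for the color/texture/depth classifier outputs), a simple Potts pairwise penalty, and iterated conditional modes (ICM) as the optimizer. All names and values are hypothetical.

```python
def icm_label(unary, edges, beta=0.5, iters=10):
    """Assign a label to each superpixel by minimizing an MRF-style energy.

    unary: dict mapping superpixel id -> {label: cost} (lower is better),
           standing in for classifier scores from color/texture/depth features.
    edges: list of (i, j) pairs of adjacent superpixels.
    beta:  Potts penalty paid for each neighbor with a different label.
    """
    neighbors = {n: [] for n in unary}
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    # Initialize each superpixel with its best unary label.
    labels = {n: min(costs, key=costs.get) for n, costs in unary.items()}
    for _ in range(iters):
        changed = False
        for n, costs in unary.items():
            # Local energy: unary cost plus a penalty per disagreeing neighbor.
            def energy(lab):
                return costs[lab] + sum(
                    beta for m in neighbors[n] if labels[m] != lab)
            best = min(costs, key=energy)
            if best != labels[n]:
                labels[n] = best
                changed = True
        if not changed:  # converged: no superpixel changed its label
            break
    return labels


# Toy example: three adjacent superpixels; the middle one has a weak
# preference for "object" that the smoothness term overrides.
unary = {
    0: {"carpet": 0.1, "wall": 0.8, "object": 0.7},
    1: {"carpet": 0.6, "wall": 0.9, "object": 0.4},
    2: {"carpet": 0.2, "wall": 0.9, "object": 0.9},
}
edges = [(0, 1), (1, 2)]
print(icm_label(unary, edges))  # → {0: 'carpet', 1: 'carpet', 2: 'carpet'}
```

ICM is a greedy coordinate-descent stand-in here; exact or approximate MRF inference (e.g., graph cuts or belief propagation) would typically be used in practice.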
