Abstract

We propose an online RGB-D scene understanding method for indoor scenes that runs in real time on mobile devices. First, we incrementally reconstruct the scene via simultaneous localization and mapping and compute a three-dimensional (3-D) geometric segmentation by fusing segments obtained from each input depth image into a global 3-D model. We combine this geometric segmentation with semantic annotations to obtain a semantic segmentation in the form of a semantic map. For efficient semantic segmentation, we encode each segment in the global model with a fast incremental 3-D descriptor and use a random forest to predict its semantic label. The predictions from successive frames are then fused to obtain a temporally consistent, confident semantic class. As a result, the overall method achieves an accuracy approaching that of state-of-the-art 3-D scene understanding methods while being much more efficient, enabling real-time execution on low-power embedded systems.
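As a rough illustration of the temporal fusion step described above, the sketch below accumulates per-frame classifier outputs for one 3-D segment and reports the most confident label. It assumes the random forest emits a class-probability vector per frame; the class names, averaging scheme, and interface are illustrative assumptions, not the paper's exact formulation:

```python
class SegmentLabelFuser:
    """Accumulates per-frame class-probability predictions for one 3-D
    segment and reports the most confident semantic label over time.
    (Illustrative sketch; the paper's fusion rule may differ.)"""

    def __init__(self, num_classes):
        self.num_classes = num_classes
        self.accum = [0.0] * num_classes  # summed per-class probabilities
        self.frames = 0

    def add_prediction(self, probs):
        """Fuse one frame's classifier output (a probability vector)."""
        assert len(probs) == self.num_classes
        for i, p in enumerate(probs):
            self.accum[i] += p
        self.frames += 1

    def fused_label(self):
        """Return (label_index, mean_confidence) over all fused frames."""
        if self.frames == 0:
            return None, 0.0
        best = max(range(self.num_classes), key=lambda i: self.accum[i])
        return best, self.accum[best] / self.frames


# Example: three noisy per-frame predictions for a 3-class problem.
fuser = SegmentLabelFuser(num_classes=3)
for probs in [[0.5, 0.3, 0.2], [0.6, 0.2, 0.2], [0.4, 0.5, 0.1]]:
    fuser.add_prediction(probs)
label, conf = fuser.fused_label()  # class 0 wins with mean probability 0.5
```

Averaging probabilities rather than taking a per-frame majority vote is one simple way to obtain a confidence value alongside the fused label.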
