Abstract

Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry, and scene graphs provide valuable information for it. This paper proposes a novel framework for the automatic generation of semantic scene graphs that interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, precise support relations between objects are inferred using two important sources of auxiliary information in indoor environments: physical stability and prior support knowledge between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not rely on pixel-wise segmentation to obtain objects, which is computationally costly. We also propose several methods to evaluate the generated scene graphs, which are lacking in this community. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms state-of-the-art methods in inferring support relations, and the estimated scene graphs match the ground truth accurately.
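
To make the three-stage pipeline outlined above concrete, the following is a minimal Python sketch, not the authors' implementation: it takes detector output as given and combines a toy physical-stability heuristic with a category-level support prior to build a scene graph. The DetectedObject class, the physical_stability heuristic, the SUPPORT_PRIOR table, and the weight alpha are all hypothetical placeholders introduced for illustration.

```python
# Illustrative sketch (not the paper's code) of the three-stage pipeline the
# abstract describes: (1) object detection, (2) support-relation inference
# combining a physical-stability cue with prior support knowledge between
# categories, and (3) scene graph construction. All names, the stability
# heuristic, the SUPPORT_PRIOR table, and the weight `alpha` are hypothetical.

from dataclasses import dataclass


@dataclass
class DetectedObject:
    label: str    # object category predicted by the detector (e.g. a CNN)
    box: tuple    # (x1, y1, x2, y2) bounding box in image coordinates
    score: float  # detection confidence


# Categories treated as structural elements that need no supporter.
STRUCTURAL = {"floor", "wall", "ceiling"}

# Hypothetical prior support knowledge between (supporting, supported) pairs,
# e.g. co-occurrence statistics collected from annotated training scenes.
SUPPORT_PRIOR = {
    ("floor", "table"): 0.95,
    ("floor", "chair"): 0.90,
    ("table", "cup"): 0.90,
}


def physical_stability(lower: DetectedObject, upper: DetectedObject) -> float:
    """Toy stability cue: how plausibly `lower` supports `upper`, approximated
    by horizontal overlap of the boxes and their vertical adjacency."""
    lx1, ly1, lx2, ly2 = lower.box
    ux1, uy1, ux2, uy2 = upper.box
    overlap = max(0.0, min(lx2, ux2) - max(lx1, ux1))
    width = max(1.0, ux2 - ux1)
    vertical_gap = abs(uy2 - ly1)  # bottom edge of upper vs. top edge of lower
    return (overlap / width) / (1.0 + vertical_gap)


def infer_support_relations(objects, alpha=0.5):
    """Combine the stability cue and the category prior; keep the most
    plausible supporter for every non-structural object."""
    relations = []
    for upper in objects:
        if upper.label in STRUCTURAL:
            continue
        best = None
        for lower in objects:
            if lower is upper:
                continue
            prior = SUPPORT_PRIOR.get((lower.label, upper.label), 0.1)
            score = alpha * physical_stability(lower, upper) + (1.0 - alpha) * prior
            if best is None or score > best[1]:
                best = (lower, score)
        if best is not None:
            relations.append((best[0], "supports", upper))
    return relations


def build_scene_graph(objects, relations):
    """Scene graph as adjacency lists: nodes are detected objects,
    directed edges are support relations."""
    graph = {obj.label: [] for obj in objects}
    for supporter, rel, supported in relations:
        graph[supporter.label].append((rel, supported.label))
    return graph


if __name__ == "__main__":
    detections = [
        DetectedObject("floor", (0, 300, 640, 480), 0.99),
        DetectedObject("table", (100, 200, 400, 310), 0.95),
        DetectedObject("cup", (180, 150, 230, 205), 0.90),
    ]
    rels = infer_support_relations(detections)
    print(build_scene_graph(detections, rels))
    # -> {'floor': [('supports', 'table')], 'table': [('supports', 'cup')], 'cup': []}
```

In this sketch the supporter of each object is chosen by a weighted sum of the two cues; the paper's actual inference procedure and how it balances stability against the prior are not specified in the abstract.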
