Abstract
In this paper, we present an end-to-end variational generative model that uses graph-based latent representations to synthesize indoor scenes in a single pass. In contrast to prior work, which autoregressively inserts and arranges furniture one object at a time in an empty room, our method learns a holistic implicit representation of the room's architectural structure and furniture placement. We first convert the 3D room scene into a dense scene graph whose nodes correspond to the objects in the room and whose edges encode the spatial relationships and functional correlations between objects. A neural network then learns a graph-based latent representation of the scene through iterative message passing, yielding a data distribution over the latent space of room layouts. Given the architectural structure of an empty room as the condition for scene synthesis, the generative model samples from the prior distribution over the room's latent representation and decodes these samples into a variety of room layouts. We evaluate our method on a state-of-the-art 3D indoor scene dataset and against state-of-the-art generation methods. The experimental results demonstrate that our method produces more plausible and more diverse scenes under the given conditions.
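The pipeline the abstract outlines — encode a scene graph by message passing, pool it into a Gaussian posterior, and sample a latent code that a decoder would turn into a layout — can be sketched numerically. This is a minimal illustrative toy, not the paper's architecture: the feature sizes, weight matrices, and single message-passing round are all assumptions, and the decoder is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene graph: 3 furniture nodes, each with a 4-d feature vector
# (standing in for category, size, position, etc.).
X = rng.normal(size=(3, 4))
# Adjacency encodes spatial/functional relations between objects.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
A_hat = A + np.eye(3)                       # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # degree normalisation

# Untrained weights for one message-passing round and the posterior heads.
W1 = rng.normal(size=(4, 8))
W_mu = rng.normal(size=(8, 2))
W_logvar = rng.normal(size=(8, 2))

# One round of message passing: average neighbour features, then transform.
H = np.tanh(D_inv @ A_hat @ X @ W1)         # shape (3, 8)

# Pool node embeddings into a whole-room representation and
# parameterise a Gaussian posterior over the latent layout code.
h_room = H.mean(axis=0)                     # shape (8,)
mu, logvar = h_room @ W_mu, h_room @ W_logvar

# Reparameterisation trick: sample a latent code for this room layout.
z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
print(z.shape)
```

At generation time, the abstract's model would instead sample z from the prior (e.g. a standard normal), conditioned on the empty room's architecture, and decode z into furniture placements; that decoder is left out of this sketch.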