Abstract

3D scene synthesis from natural language instructions has become a popular direction in computer graphics, with data-driven generative models making significant progress recently. However, previous methods have mainly focused on one-time scene generation and lack the interactive capability to generate, update, or correct scenes according to user instructions. To overcome this limitation, this paper focuses on text-guided interactive scene synthesis. First, we introduce the SceneMod dataset, which comprises 168k paired scenes with textual descriptions of the modifications. To support the interactive scene synthesis task, we propose a two-stage diffusion generative model that integrates scene-prior guidance into the denoising process to explicitly enforce physical constraints and foster more realistic scenes. Experimental results demonstrate that our approach outperforms baseline methods in text-guided scene synthesis tasks. Our system expands the scope of data-driven scene synthesis tasks and provides a novel, more flexible tool for users and designers in 3D scene generation. Code and dataset are available at https://github.com/bshfang/SceneMod.
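The abstract mentions injecting scene-prior guidance into the denoising process to enforce physical constraints, but does not spell out the mechanism. A common way to realize such guidance is loss-guided sampling in the style of classifier guidance, where the gradient of a differentiable constraint loss nudges each denoising step. The sketch below is a minimal illustration under that assumption; guided_denoise_step, overlap_penalty, and the model signature are hypothetical names for exposition, not the paper's actual API.

```python
import torch

def overlap_penalty(boxes):
    """Toy physical-constraint loss: penalize pairwise overlap of
    axis-aligned 2D object boxes (x0, y0, x1, y1), shape (N, 4).
    A stand-in for whatever scene prior the paper actually uses."""
    x0, y0, x1, y1 = boxes.unbind(-1)
    ix = (torch.minimum(x1[:, None], x1[None, :])
          - torch.maximum(x0[:, None], x0[None, :])).clamp(min=0)
    iy = (torch.minimum(y1[:, None], y1[None, :])
          - torch.maximum(y0[:, None], y0[None, :])).clamp(min=0)
    area = ix * iy
    return area.sum() - area.diagonal().sum()  # exclude self-overlap

def guided_denoise_step(model, x_t, t, text_emb, prior_loss, guidance_scale=1.0):
    """One reverse-diffusion step with scene-prior guidance (sketch).

    The text-conditioned noise prediction is shifted by the gradient
    of the scene-prior loss w.r.t. the current noisy scene layout,
    steering samples away from constraint violations."""
    x_t = x_t.detach().requires_grad_(True)
    eps = model(x_t, t, text_emb)             # hypothetical denoiser call
    loss = prior_loss(x_t)                    # e.g. overlap_penalty above
    grad = torch.autograd.grad(loss, x_t)[0]  # gradient of the constraint loss
    return eps + guidance_scale * grad        # guided noise estimate
```

In this reading, the guidance term plays the same role as the classifier gradient in classifier-guided diffusion; the actual prior, scene parameterization, and two-stage structure are specific to the paper.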