Abstract

Customized text-to-image synthesis based on diffusion models has recently attracted widespread attention and made significant progress. However, reconstructing multiple concepts in the same scene remains highly challenging. To address this, we propose a novel framework called TDG-Diff, which employs two-stage diffusion guidance to achieve customized image synthesis with multiple concepts. TDG-Diff focuses on improving the sampling process of the diffusion model. Specifically, TDG-Diff subdivides sampling into two key stages, attribute separation and appearance refinement, and introduces spatial constraints and concept representations to guide each stage. In the attribute separation stage, TDG-Diff introduces a novel attention modulation method that uses spatial constraint information to separate the attributes of different concepts, reducing the risk of attribute entanglement between them. In the appearance refinement stage, TDG-Diff proposes a fusion sampling approach that combines the global text description with concept representations, strengthening the model's ability to capture and represent fine-grained concept details. Extensive qualitative and quantitative results demonstrate the effectiveness of TDG-Diff in customized text-to-image synthesis.
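
The full paper is not included here, but the two-stage idea described in the abstract can be illustrated with a rough sketch. The Python code below assumes a diffusers-style UNet and scheduler; the function names, mask layouts, stage switch point, and fusion weight are illustrative assumptions for exposition, not the actual TDG-Diff implementation.

```python
# Minimal sketch of a two-stage guided sampling loop in the spirit of the
# abstract. All names and values here (masked_cross_attention, stage_switch,
# fusion_weight, the mask layouts) are illustrative assumptions.
import torch


def masked_cross_attention(attn_scores, token_region_masks):
    """Stage 1 hook (attribute separation): bias each concept's text tokens
    toward its user-provided spatial region before the softmax.

    attn_scores:        (batch, heads, image_tokens, text_tokens)
    token_region_masks: (batch, image_tokens, text_tokens), 1 where a text
                        token may attend to that image location (assumed).
    """
    bias = (1.0 - token_region_masks).unsqueeze(1)   # (B, 1, img, txt)
    return (attn_scores - 1e4 * bias).softmax(dim=-1)


@torch.no_grad()
def two_stage_sampling(unet, scheduler, latents, global_cond, concept_conds,
                       concept_masks, stage_switch=0.5, fusion_weight=0.3):
    """Denoising loop that switches from spatially constrained sampling
    (attribute separation) to fusion sampling (appearance refinement).

    concept_conds: per-concept embeddings (e.g. from a concept encoder).
    concept_masks: per-concept latent-resolution masks, each (batch, H, W).
    The switch point and fusion weight are assumed hyperparameters.
    """
    timesteps = scheduler.timesteps
    switch_idx = int(len(timesteps) * stage_switch)
    for i, t in enumerate(timesteps):
        if i < switch_idx:
            # Stage 1: the UNet's cross-attention layers are assumed to be
            # patched to call masked_cross_attention with the region masks,
            # keeping the attributes of different concepts from entangling.
            eps = unet(latents, t, encoder_hidden_states=global_cond).sample
        else:
            # Stage 2: fuse the global-prompt prediction with per-concept
            # predictions inside each concept's region to refine details.
            eps = unet(latents, t, encoder_hidden_states=global_cond).sample
            for cond, mask in zip(concept_conds, concept_masks):
                eps_c = unet(latents, t, encoder_hidden_states=cond).sample
                m = mask.unsqueeze(1)  # (B, 1, H, W), broadcast over channels
                eps = (1 - fusion_weight * m) * eps + fusion_weight * m * eps_c
        latents = scheduler.step(eps, t, latents).prev_sample
    return latents
```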
