Abstract

Owing to its high flexibility and alignment with how people naturally communicate, text description has been widely used in recent image synthesis research and has achieved many encouraging results. However, text can only determine the basic content of a generated image, not the specific shape of the synthesized object, which limits practicality. More importantly, current text-to-image synthesis methods cannot use new text descriptions to further modify a synthesized result. To address these problems, this paper proposes a text-guided, customizable image synthesis and manipulation method. The proposed method first synthesizes an image from text and contour information, and then modifies the synthesized content according to new text until a satisfactory result is obtained. The text and contour inputs determine the specific content and the object shape of the desired image, respectively. Moreover, the input text, the contour, and the subsequent text for content modification can all be supplied manually, which significantly improves user controllability during image synthesis and makes the method more flexible and practical than existing approaches. Experimental results on the Caltech-UCSD Birds-200-2011 (CUB) and Microsoft Common Objects in Context (MS COCO) datasets demonstrate the proposed method's feasibility and versatility.
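To make the two-stage workflow concrete, the sketch below illustrates the interface the abstract implies: a synthesis step conditioned on text and a contour map, followed by a manipulation step driven by new text. All names here (`TextContourSynthesizer`, `synthesize`, `manipulate`) are illustrative placeholders rather than the paper's actual API, and the trivial encoder and blending logic stand in for a trained conditional generator.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# Stage 1 synthesizes an image from text plus a contour (shape) map;
# stage 2 re-edits the result from a new text description while
# keeping the object shape fixed. The "model" is a toy stand-in.
import numpy as np


class TextContourSynthesizer:
    def __init__(self, image_size: int = 256, seed: int = 0):
        self.image_size = image_size
        self.rng = np.random.default_rng(seed)

    def _encode_text(self, text: str) -> np.ndarray:
        # Stand-in text encoder: hash words into a fixed-length embedding.
        vec = np.zeros(128)
        for word in text.lower().split():
            vec[hash(word) % 128] += 1.0
        return vec / max(np.linalg.norm(vec), 1e-8)

    def synthesize(self, text: str, contour: np.ndarray) -> np.ndarray:
        # Stage 1: the contour fixes the object shape; the text embedding
        # (noisily) fills in appearance inside that shape.
        emb = self._encode_text(text)
        appearance = self.rng.normal(
            emb.mean(), 0.1, size=(self.image_size, self.image_size, 3)
        )
        mask = contour[..., None].astype(float)
        return appearance * mask  # content only where the contour allows

    def manipulate(self, image: np.ndarray, new_text: str) -> np.ndarray:
        # Stage 2: blend the existing content toward the new description,
        # leaving the shape (non-zero region) unchanged.
        emb = self._encode_text(new_text)
        target = np.full_like(image, emb.mean())
        mask = (np.abs(image).sum(axis=-1, keepdims=True) > 0).astype(float)
        return image * (1 - 0.5 * mask) + target * (0.5 * mask)


if __name__ == "__main__":
    contour = np.zeros((256, 256))
    contour[64:192, 64:192] = 1.0  # a square silhouette as a dummy contour
    model = TextContourSynthesizer()
    img = model.synthesize("a small bird with a red head", contour)
    edited = model.manipulate(img, "a small bird with a yellow head")
    print(img.shape, edited.shape)
```

The key design point the sketch mirrors is the separation of concerns: the contour argument alone controls object shape, while the text argument (initial or subsequent) controls appearance, so either can be changed independently by the user.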
