Abstract

Example-based texture synthesis aims to synthesize textures that are as similar as possible to a given exemplar. For complex texture patterns, however, existing methods produce incorrect results because their feature extraction capability is insufficient. To address this problem, this paper proposes an optimized generative adversarial network model that targets quality issues such as low resolution and insufficient detail in texture synthesis. To this end, we propose a new multi-head mutual self-attention (MHMSA) mechanism. Unlike standard self-attention, MHMSA models the mutual relationships among all positions in the feature space, so that cues from every feature position can be used to generate details. Embedding MHMSA into the generator therefore improves its ability to extract both detailed and global features. Experimental results show that the proposed model significantly improves the visual quality of synthesized textures and that MHMSA outperforms self-attention on the image generation task.
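The abstract does not give the exact formulation of MHMSA, so as a point of reference the following is a minimal numpy sketch of standard multi-head self-attention over feature positions, the baseline the paper compares against. All function and variable names here are illustrative assumptions, not taken from the paper; the key property shown is that every output position is a weighted sum over all positions, so cues from the entire feature map can inform each generated detail.

```python
import numpy as np

def multi_head_self_attention(x, w_q, w_k, w_v, num_heads):
    """Baseline multi-head self-attention (illustrative, not the paper's MHMSA).

    x: (n, d) matrix of n feature positions with d channels.
    w_q, w_k, w_v: (d, d) projection matrices.
    Each output position attends to ALL positions via a softmax
    over pairwise affinities, head by head.
    """
    n, d = x.shape
    dh = d // num_heads                      # per-head channel dimension
    q, k, v = x @ w_q, x @ w_k, x @ w_v      # project to queries/keys/values
    outs = []
    for h in range(num_heads):
        sl = slice(h * dh, (h + 1) * dh)     # this head's channel slice
        scores = q[:, sl] @ k[:, sl].T / np.sqrt(dh)   # (n, n) affinities
        scores -= scores.max(axis=1, keepdims=True)    # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=1, keepdims=True)        # softmax over positions
        outs.append(attn @ v[:, sl])         # weighted sum of values
    return np.concatenate(outs, axis=1)      # (n, d) attended features

rng = np.random.default_rng(0)
n, d, heads = 16, 8, 2
x = rng.normal(size=(n, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
y = multi_head_self_attention(x, w_q, w_k, w_v, heads)
print(y.shape)  # (16, 8)
```

The paper's MHMSA would replace the affinity computation above with its mutual-relationship modeling between positions; the surrounding multi-head structure is the shared scaffold.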
