Abstract
Designing and generating novel fonts manually is a laborious and time-consuming process owing to the large number and complexity of characters in most writing systems. Recent advances in generative adversarial networks (GANs) have significantly improved font generation. These GAN-based approaches treat font generation either as a vanilla GAN problem (that is, synthesizing characters from a uniform latent vector) or as an image-to-image translation problem. While the former approach can generate diverse font styles without limitation, the generated fonts contain artifacts and are restricted to low-resolution images, which impairs their usability. The latter approach generates high-quality images for previously observed fonts, but quality degrades at inference time when designing novel fonts; moreover, additional fine-tuning steps are required to achieve photorealistic results, which is computationally expensive and time-consuming. To address the shortcomings of these approaches, we propose a font generation method that follows the vanilla GAN approach to generate an infinite number of font styles while focusing on real-time generation of photorealistic font images. We also aim to produce high-resolution images suitable for practical applications. To this end, we propose a conditional font GAN (CFGAN) with a network architecture designed to generate novel, style-consistent, and diverse font character sets. The generated character is controlled by a non-trainable fixed character vector, while the style variation, sampled from a Gaussian distribution, is fused into all blocks of the generator through an adaptive instance normalization (AdaIN) operation. The generator architecture can thus simultaneously produce an infinite number of font styles with style consistency and diversity during inference.
We conducted various quantitative and qualitative experiments to demonstrate the effectiveness of the proposed model in terms of both image quality and computational cost.
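The style-fusion step described above relies on AdaIN: each channel of a generator feature map is normalized to zero mean and unit variance, then re-scaled and shifted by parameters derived from the style latent. The following NumPy sketch illustrates the operation itself; the feature shapes, the random affine mapping from the latent to the scale/shift pair, and all variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def adain(content, style_scale, style_shift, eps=1e-5):
    """Adaptive instance normalization: normalize each channel of the
    content feature map over its spatial dimensions, then re-scale and
    shift with style-derived parameters.
    content: (C, H, W); style_scale, style_shift: (C,)."""
    mean = content.mean(axis=(1, 2), keepdims=True)
    std = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mean) / (std + eps)
    return style_scale[:, None, None] * normalized + style_shift[:, None, None]

# Hypothetical usage for one generator block: a style latent sampled
# from a Gaussian (as in the abstract) modulates the block's features.
rng = np.random.default_rng(0)
features = rng.normal(size=(8, 16, 16))   # (C, H, W) feature map
style = rng.normal(size=(16,))            # style latent z ~ N(0, I)
# In the real network a learned affine layer maps z -> (scale, shift);
# a random matrix stands in for it here.
W = rng.normal(size=(16, 16)) * 0.1
scale, shift = np.split(W @ style, 2)     # (8,) each
out = adain(features, 1.0 + scale, shift)
```

After the operation, each output channel's mean equals the style shift and its standard deviation equals the magnitude of the style scale, which is how the Gaussian style code imposes a consistent style on every block it touches.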