Handwritten fonts possess unique expressive qualities; however, their clarity often suffers because of inconsistent handwriting. This study introduces FontFusionGAN (FFGAN), a novel method that enhances handwritten fonts by fusing them with printed fonts. The proposed approach leverages a generative adversarial network (GAN) to synthesize fonts that combine the desirable features of both handwritten and printed font styles. Training the GAN on a comprehensive dataset of handwritten and printed fonts enables it to produce legible and visually appealing font samples. The method was applied to a dataset of handwritten fonts and yielded substantial improvements in the legibility of the original fonts while retaining their unique aesthetic essence. Unlike the original GAN setting, in which a single noise vector is used to generate a sample image, we randomly sampled two noise vectors, z1 and z2, from a Gaussian distribution to train the generator. Simultaneously, we fed a real image into the fusion encoder for exact reconstruction. This scheme ensured that style mixing was learned during training. During inference, we provided the encoder with two font images, one handwritten and one printed, to obtain their respective latent vectors. The latent vector of the handwritten font image was then injected into the first five layers of the generator, whereas the latent vector of the printed font image was injected into the last two layers, yielding a refined handwritten font image. The proposed method has the potential to improve the readability of handwritten fonts, offering benefits across diverse applications such as document composition, letter writing, and assisting individuals with reading and writing difficulties.
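
To make the inference procedure concrete, the following is a minimal PyTorch sketch of the layer-wise latent injection described above. The architectures, dimensions, and names (StyleBlock, FusionEncoder, fuse) are illustrative assumptions rather than the paper's actual implementation; the sketch only shows how a handwritten-font latent can drive the first five style-modulated generator blocks while a printed-font latent drives the last two.

```python
# Minimal sketch, assuming a StyleGAN-like generator with seven style-modulated
# blocks. All module names, shapes, and layer choices below are hypothetical
# placeholders, not the authors' implementation.
import torch
import torch.nn as nn

LATENT_DIM = 512
NUM_BLOCKS = 7  # first five take the handwritten latent, last two the printed latent


class StyleBlock(nn.Module):
    """One style-modulated generator block (hypothetical simplification)."""
    def __init__(self, channels: int):
        super().__init__()
        self.affine = nn.Linear(LATENT_DIM, channels)  # latent -> per-channel style
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        style = self.affine(w).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        return self.act(self.conv(x * (1.0 + style)))       # modulate, then convolve


class Generator(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.const = nn.Parameter(torch.randn(1, channels, 8, 8))  # learned constant input
        self.blocks = nn.ModuleList([StyleBlock(channels) for _ in range(NUM_BLOCKS)])
        self.to_img = nn.Conv2d(channels, 1, 1)  # grayscale glyph output

    def forward(self, w_per_block: list) -> torch.Tensor:
        x = self.const.expand(w_per_block[0].size(0), -1, -1, -1)
        for block, w in zip(self.blocks, w_per_block):
            x = block(x, w)  # each block is conditioned on its own latent
        return torch.tanh(self.to_img(x))


class FusionEncoder(nn.Module):
    """Maps a font image to a latent vector (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, LATENT_DIM),
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.net(img)


def fuse(encoder: FusionEncoder, generator: Generator,
         handwritten_img: torch.Tensor, printed_img: torch.Tensor) -> torch.Tensor:
    """Inject the handwritten latent into the first five blocks and the
    printed latent into the last two, as described in the text."""
    w_hand = encoder(handwritten_img)
    w_print = encoder(printed_img)
    w_per_block = [w_hand] * 5 + [w_print] * 2
    return generator(w_per_block)


if __name__ == "__main__":
    enc, gen = FusionEncoder(), Generator()
    hand = torch.randn(1, 1, 8, 8)   # placeholder glyph images
    prnt = torch.randn(1, 1, 8, 8)
    fused = fuse(enc, gen, hand, prnt)
    print(fused.shape)  # torch.Size([1, 1, 8, 8])
```

The split of five early blocks versus two late blocks mirrors the intuition stated above: early generator layers tend to control coarse structure (here taken from the handwritten style), while the final layers refine local appearance (here taken from the printed style).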