Abstract

Compared with printed Chinese characters, handwritten Chinese characters are affected by writing coherence and human factors, resulting in differences in stroke deformation, displacement, tilt, length, and thickness, which in turn hinder the recognition of handwritten Chinese characters. A method based on generative adversarial networks (GANs) for converting Chinese character handwriting to printing fonts is proposed herein, in which handwriting-to-printing conversion is framed as a font style conversion and regarded as the optimal normalization of handwritten Chinese characters. First, an encoder–decoder is integrated into a generative adversarial network, and the symmetric network extracts multi-scale information from the handwritten font. Subsequently, a U-Net with skip connections is used to extract deep features and to transfer the large amount of low-level information shared between the input and output; the skip connections also reduce information loss during down-sampling. Finally, an integrated loss, combining font style, encoding consistency, and L1 losses, measures the differences in character structure and font style between the generated Chinese character image and the target font image. Experiments converting Chinese character handwriting to multiple printing fonts are performed on the CASIA and CASIA-HWDB1.1 datasets, with image pixel difference and character recognition accuracy as evaluation metrics. The results show a more normalized visual writing style and improved recognition, verifying the effectiveness of the method.
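
To make the architecture concrete, the following is a minimal sketch of a symmetric encoder–decoder with U-Net skip connections, written in PyTorch. This is an illustration under assumptions not stated in the abstract: the class name TinyUNet, the layer widths, the two-stage depth, and the single-channel input are all hypothetical, not the paper's actual configuration.

```python
# A minimal sketch of a symmetric encoder-decoder with U-Net skip
# connections, assuming PyTorch. All layer sizes are illustrative;
# the paper's real architecture is not specified in the abstract.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, ch=1):
        super().__init__()
        # Encoder: each stage halves spatial resolution (down-sampling).
        self.down1 = nn.Sequential(nn.Conv2d(ch, 64, 4, 2, 1), nn.LeakyReLU(0.2))
        self.down2 = nn.Sequential(nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2))
        # Decoder: each stage doubles resolution; the last stage takes
        # doubled channels because a skip connection concatenates the
        # mirrored encoder feature.
        self.up1 = nn.Sequential(nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(64 * 2, ch, 4, 2, 1)

    def forward(self, x):
        e1 = self.down1(x)   # multi-scale feature at 1/2 resolution
        e2 = self.down2(e1)  # deeper feature at 1/4 resolution
        d1 = self.up1(e2)    # back to 1/2 resolution
        # Skip connection: reuse e1 so low-level stroke detail lost
        # during down-sampling is passed directly to the decoder.
        d1 = torch.cat([d1, e1], dim=1)
        return torch.tanh(self.up2(d1))
```

Concatenating the mirrored encoder feature before the final up-sampling stage is what lets low-level stroke information bypass the bottleneck, matching the abstract's point that skip connections reduce information loss during down-sampling.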
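The integrated loss can be sketched in the same spirit. The three terms named in the abstract are combined below; the helper networks style_extractor and encoder, the use of MSE for the style and encoding comparisons, and the weights w_style, w_enc, and w_l1 are assumptions for illustration only, not the paper's definitions.

```python
# A hedged sketch of an integrated loss combining font-style,
# encoding-consistency, and L1 terms. The helper networks and
# weights are hypothetical placeholders.
import torch
import torch.nn.functional as F

def integrated_loss(generated, target, style_extractor, encoder,
                    w_style=1.0, w_enc=1.0, w_l1=100.0):
    """generated, target: (N, C, H, W) character images.
    style_extractor: network mapping an image to a font-style embedding.
    encoder: the generator's encoder, reused to compare latent codes.
    """
    # L1 loss penalizes per-pixel differences in character structure.
    l1 = F.l1_loss(generated, target)
    # Font-style loss compares style embeddings of the two images.
    style = F.mse_loss(style_extractor(generated), style_extractor(target))
    # Encoding-consistency loss: both images should map to similar
    # latent codes under the shared encoder.
    enc = F.mse_loss(encoder(generated), encoder(target))
    return w_l1 * l1 + w_style * style + w_enc * enc
```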
