Abstract

The surviving works of a given calligrapher often contain only a limited subset of Chinese characters, rather than the full character set required for typography, which does not meet practical needs; there is therefore a need to build complete sets of calligraphic characters for individual calligraphers. Most recent popular methods for generating calligraphic characters are based on deep learning and use an end-to-end approach to generate the target image. However, deep learning-based methods usually fail to convert stroke structures correctly when the printed font differs significantly from the target font in structure. In this paper, we propose an involution-based calligraphic character generation model that converts printed fonts into target calligraphic fonts. We improve the Pix2Pix model with a new neural operator, involution, which focuses on spatial feature processing and handles the relationships between strokes better than models using only convolution, so that the generated calligraphic characters have accurate stroke structures. A self-attention module and residual blocks are also added to deepen the network and improve the model's feature-processing capability. We evaluated our method and several baseline methods on the same dataset, and the experimental results demonstrate that our model is superior in both visual and quantitative evaluation.
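For readers unfamiliar with the involution operator the abstract relies on, the following is a minimal PyTorch sketch of involution as introduced by Li et al. (CVPR 2021), on which this work builds. The code is illustrative only: the class name, hyperparameters, and PyTorch framing are assumptions, not the authors' released implementation. The key contrast with convolution is that the kernel is shared across channels (within a group) but generated anew for every spatial position, which is why it can adapt to the local stroke layout of a glyph.

```python
import torch
import torch.nn as nn


class Involution2d(nn.Module):
    """Minimal involution layer (stride 1), after Li et al., CVPR 2021.

    Unlike convolution, the kernel is channel-shared (within a group)
    but spatially varying: a small bottleneck predicts one K*K kernel
    per group at every pixel of the feature map.
    """

    def __init__(self, channels: int, kernel_size: int = 7,
                 groups: int = 1, reduction: int = 4):
        super().__init__()
        assert kernel_size % 2 == 1, "odd kernel preserves spatial size"
        self.k, self.groups = kernel_size, groups
        # Kernel-generation branch: 1x1 bottleneck -> K*K*G kernel weights.
        self.reduce = nn.Conv2d(channels, channels // reduction, 1)
        self.span = nn.Conv2d(channels // reduction,
                              kernel_size * kernel_size * groups, 1)
        self.unfold = nn.Unfold(kernel_size, padding=(kernel_size - 1) // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # One K*K kernel per group and per pixel: (B, G, 1, K*K, H, W).
        kernel = self.span(self.reduce(x))
        kernel = kernel.view(b, self.groups, self.k * self.k, h, w).unsqueeze(2)
        # K*K neighborhood patches of the input: (B, G, C/G, K*K, H, W).
        patches = self.unfold(x).view(b, self.groups, c // self.groups,
                                      self.k * self.k, h, w)
        # Position-wise weighted sum over each neighborhood.
        return (kernel * patches).sum(dim=3).view(b, c, h, w)


# Example: a 64-channel feature map, e.g. from a generator encoder.
feat = torch.randn(2, 64, 64, 64)
out = Involution2d(64, kernel_size=7, groups=4)(feat)
print(out.shape)  # torch.Size([2, 64, 64, 64])
```

Because the weights at each location are computed from the feature map itself, such a layer can respond differently to intersecting, parallel, or curving strokes at different positions, which is the property the abstract credits for the improved stroke structure; how the authors wire it into Pix2Pix is detailed in the full paper.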
