Abstract

Scene text style transfer across languages is an open challenge for the video and scene text recognition community because it plays a vital role in poster and web design, in augmenting character images, and in editing characters to improve scene text recognition performance and usability. This work presents a new model, the Script Independent Scene Text Style Transfer Network (SISTSTNet), for extracting scene characters and transferring text style simultaneously. SISTSTNet performs style transfer by mapping in a language-independent feature space. It is built from a Style Parameter Network and a Target Encoder Network, both composed of lightweight MobileNetV3 convolutional and residual blocks, to capture the style and shape needed to generate target characters. In addition, a generative model based on the Visual Geometry Group (VGG) network is used for character replacement. SISTSTNet is flexible and handles different languages and arbitrary examples in a unified fashion. Experimental results on images in eight languages (English, Chinese, Hindi, Russian, Japanese, Arabic, Greek, and Bengali), together with cross-language validation, demonstrate the effectiveness of the proposed method. Its performance is superior to state-of-the-art methods in terms of quality measures, language independence, shape preservation, and efficiency. The code and dataset will be released to the public to support reproducibility.
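Since the paper's code has not yet been released, the following is only a minimal PyTorch sketch of the dual-branch design the abstract describes: a Style Parameter Network and a Target Encoder Network built from MobileNetV3-style inverted-residual blocks with residual connections, whose features are fused and decoded into a stylized character image. All module names, channel widths, and the fusion-by-concatenation step are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch of the SISTSTNet dual-branch architecture.
# Layer sizes and the concatenation-based fusion are assumptions.
import torch
import torch.nn as nn


class InvertedResidual(nn.Module):
    """MobileNetV3-style block: 1x1 expand -> 3x3 depthwise -> 1x1 project,
    wrapped in a residual connection."""

    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.block = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.Hardswish(),
            nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.Hardswish(),
            nn.Conv2d(hidden, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return x + self.block(x)  # residual connection


def make_encoder(in_ch: int, width: int = 64, depth: int = 3) -> nn.Sequential:
    """Shared template for the style and target encoder branches."""
    layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.Hardswish()]
    layers += [InvertedResidual(width) for _ in range(depth)]
    return nn.Sequential(*layers)


class SISTSTNetSketch(nn.Module):
    """Fuses a style reference image with a target (content) image."""

    def __init__(self):
        super().__init__()
        self.style_encoder = make_encoder(3)   # Style Parameter Network branch
        self.target_encoder = make_encoder(3)  # Target Encoder Network branch
        self.decoder = nn.Sequential(          # fused features -> RGB image
            nn.Conv2d(128, 64, 3, padding=1), nn.Hardswish(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, style_img, target_img):
        fused = torch.cat([self.style_encoder(style_img),
                           self.target_encoder(target_img)], dim=1)
        return self.decoder(fused)


if __name__ == "__main__":
    net = SISTSTNetSketch()
    out = net(torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```

The paper additionally mentions a VGG-based generative model for character replacement; that component is omitted here because the abstract does not specify how it is attached to the two encoder branches.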
