Abstract

Text image super-resolution (SR) is an active research topic in computer vision. Traditional convolutional neural networks tend to over-smooth the fine structures of SR images, while popular generative adversarial network (GAN) based SR methods may produce undesirable results with distorted edges. To address these problems, this paper proposes a bidirectional-branch generative adversarial network (B2GAN) for text image SR. The proposed B2GAN consists of two parts: a generator and a discriminator. In the generator, a forward branch maps the low-resolution (LR) image to its corresponding SR image to generate a photo-realistic result, while a reverse branch maps the generated SR image back to an LR image that should match the original LR input. A loss function is proposed to train a well-behaved generator and guarantee stable convergence to the original LR image. In the discriminator, VGG16 and the adversarial loss are employed. In the experiments, we establish two text image datasets, containing Chinese and multi-language characters respectively, to evaluate the effectiveness of the proposed B2GAN. Quantitative and qualitative experimental results validate that the proposed B2GAN achieves competitive performance and even outperforms state-of-the-art methods.
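To make the bidirectional-branch idea concrete, below is a minimal PyTorch-style sketch of the generator objective: a forward branch upscales LR to SR, a reverse branch maps the SR output back down, and a cycle-style term anchors that reconstruction to the original LR input. All names here (ForwardBranch, ReverseBranch, generator_loss), the layer choices, the 2x scale factor, the stand-in discriminator, and the loss weights are illustrative assumptions, not the paper's exact B2GAN configuration (the paper's discriminator, for instance, is VGG16-based).

```python
import torch
import torch.nn as nn

class ForwardBranch(nn.Module):
    """LR -> SR: 2x upscaling via a small conv stack (placeholder depth)."""
    def __init__(self, channels=3, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels * 4, 3, padding=1),
            nn.PixelShuffle(2),  # sub-pixel shuffle: 4*C channels -> C at 2x size
        )

    def forward(self, lr):
        return self.net(lr)

class ReverseBranch(nn.Module):
    """SR -> LR: maps the generated SR image back to LR resolution."""
    def __init__(self, channels=3, features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, features, 3, stride=2, padding=1),  # 2x downsample
            nn.ReLU(inplace=True),
            nn.Conv2d(features, channels, 3, padding=1),
        )

    def forward(self, sr):
        return self.net(sr)

def generator_loss(lr, hr, forward_branch, reverse_branch, discriminator,
                   lambda_cycle=10.0, lambda_adv=1e-3):
    """Pixel loss on the SR output, plus a cycle term that ties the
    reverse-branch reconstruction back to the original LR input, plus a
    standard adversarial term. Weights are assumptions."""
    sr = forward_branch(lr)
    lr_rec = reverse_branch(sr)
    pixel = nn.functional.l1_loss(sr, hr)
    cycle = nn.functional.l1_loss(lr_rec, lr)  # SR must map back to its LR source
    logits = discriminator(sr)
    adv = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    return pixel + lambda_cycle * cycle + lambda_adv * adv

# Usage sketch: LR 3x32x32 -> SR 3x64x64, with a toy stand-in discriminator.
fwd, rev = ForwardBranch(), ReverseBranch()
disc = nn.Conv2d(3, 1, 3, padding=1)  # patch-style logits; not the paper's VGG16
lr = torch.randn(4, 3, 32, 32)
hr = torch.randn(4, 3, 64, 64)
loss = generator_loss(lr, hr, fwd, rev, disc)
loss.backward()
```

The cycle term plays the role the abstract assigns to the reverse branch: by penalizing the distance between the downscaled SR output and the original LR input, it constrains the forward branch to hallucinate detail that remains consistent with the observed low-resolution text.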
