Abstract

In recent years, single image super-resolution (SISR) has become an important alternative to improving image resolution through hardware alone. Because it relies mainly on deep learning algorithms to convert low-resolution (LR) images into high-resolution images, SISR is widely used in satellite remote sensing, video surveillance, and medical image processing, and offers low cost, simple operation, and high efficiency. This paper proposes an image super-resolution method based on a generative adversarial network, the text localization generative adversarial nets (TLGAN) model. The method builds on super-resolution generative adversarial networks (SRGAN) and removes the batch normalization layers, which significantly reduces the computational burden of the model. In the TLGAN model, we use transfer learning: the model is pre-trained on the large ImageNet dataset and then applied to the animes cartoon image dataset to achieve image super-resolution. Experimental results show that, compared with bicubic interpolation and SRGAN, the proposed method runs faster and produces super-resolution images with better visual perception.
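The paper's code is not reproduced here; the snippet below is only a minimal PyTorch sketch of the kind of SRGAN-style residual block the abstract describes, with the batch normalization layers removed. The class name `ResidualBlock`, the channel width of 64, and the use of PReLU are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """SRGAN-style residual block with the batch-normalization layers removed (assumed layout)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Two 3x3 convolutions with an activation in between; no BN layers,
        # which lowers the per-block compute and memory cost.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity skip connection; the block output is added directly to its input.
        return x + self.body(x)


if __name__ == "__main__":
    block = ResidualBlock(64)
    lr_features = torch.randn(1, 64, 24, 24)  # dummy low-resolution feature map
    print(block(lr_features).shape)           # torch.Size([1, 64, 24, 24])
```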
