Abstract

Natural scene text images captured by handheld devices often suffer from low resolution (LR), which makes subsequent detection and recognition tasks more challenging. To address this problem, LR text images are generally processed with super-resolution (SR) first. In this paper, we propose a novel low-resolution text image super-resolution method. The method adopts a residual-in-residual dense network (RRDN) to extract deeper high-frequency features than the residual dense network (RDN), and then enhances spatial and channel features with an attention mechanism. Motivated by the characteristics of text, we add a gradient loss to the adversarial learning objective. Experiments show that our method performs well both qualitatively and quantitatively on the latest public text image super-resolution dataset. The proposed method also achieves state-of-the-art results on natural scene text images.
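The abstract does not define the gradient loss; a common formulation in image super-resolution compares first-order image gradients of the SR output and the HR ground truth under an L1 penalty, which encourages sharp character edges. The sketch below is an illustrative NumPy implementation under that assumption, not the authors' exact loss.

```python
import numpy as np

def image_gradients(img):
    """First-order finite differences of a 2-D grayscale image."""
    gx = img[:, 1:] - img[:, :-1]  # horizontal differences
    gy = img[1:, :] - img[:-1, :]  # vertical differences
    return gx, gy

def gradient_loss(sr, hr):
    """Mean L1 distance between gradient maps of SR and HR images.

    Illustrative only: assumes single-channel float images of equal size.
    """
    sr_gx, sr_gy = image_gradients(sr)
    hr_gx, hr_gy = image_gradients(hr)
    return np.mean(np.abs(sr_gx - hr_gx)) + np.mean(np.abs(sr_gy - hr_gy))
```

In a GAN-based SR setup this term would typically be weighted and summed with the pixel reconstruction loss and the adversarial loss.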
