Abstract
Rapid progress in text image recognition has been achieved with the development of deep-learning techniques. However, comprehensive license plate recognition in real scenes remains a great challenge, since no publicly available large, diverse datasets exist for training deep-learning models. This paper aims to synthesise license plate images with generative adversarial networks (GANs), avoiding the need to collect a vast amount of labelled data. The authors propose PixTextGAN, a novel controllable architecture that generates specific character structures for different text regions, producing synthetic license plate images with realistic text details. Specifically, a comprehensive structure-aware loss function is presented to preserve the key characteristics of each character region and thereby achieve appearance adaptation for better recognition. Qualitative and quantitative experiments demonstrate the superiority of the proposed method over state-of-the-art GANs in text image synthesis. Further license plate recognition experiments on the ReId and CCPD datasets show that training with images synthesised by PixTextGAN can greatly improve recognition accuracy.
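The abstract does not give the form of the structure-aware loss. Purely as an illustration of the idea it describes, the sketch below shows one plausible objective: a global reconstruction term plus per-character-region terms weighted more heavily, assuming binary masks marking each character region. All names and the weighting scheme here are hypothetical, not the authors' actual formulation.

```python
import numpy as np

def structure_aware_loss(generated, target, region_masks, lambda_struct=10.0):
    """Hypothetical sketch of a structure-aware objective (not the paper's
    actual loss): a global L1 reconstruction term plus per-character-region
    L1 terms, so errors inside character regions are penalised more.

    generated, target : float arrays of the same shape (synthesised / real image)
    region_masks      : list of binary arrays, one mask per character region
    lambda_struct     : extra weight on the character-region terms (assumed)
    """
    abs_err = np.abs(generated - target)
    # Global term: mean absolute error over the whole image.
    global_term = abs_err.mean()
    # Region term: mean absolute error within each character mask.
    region_term = 0.0
    for mask in region_masks:
        area = mask.sum()
        if area > 0:
            region_term += (abs_err * mask).sum() / area
    return global_term + lambda_struct * region_term
```

In a full GAN this term would be added to the usual adversarial loss; the sketch isolates only the structure-preserving component described in the abstract.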