Abstract

With the growing use of surveillance and monitoring applications such as person re-identification, vehicle re-identification, and sports event tracking, the need for text detection and end-to-end recognition is also increasing. Although past deep learning-based models have addressed challenges such as arbitrarily shaped text, multiple scripts, and variations in the geometric structure of characters, their scope is limited to a single view. This paper presents an end-to-end model for text recognition that refines multiple views of the same scene, called E2EMVSTR (End-to-End Model for Multi-View Scene Text Recognition). Exploiting the characteristics shared across multi-view texts, we propose a cycle-consistency pairwise-similarity-based deep learning model to detect text more efficiently in three input views. The detected texts are then fed to a combined Siamese network and semi-supervised attention-embedding network to obtain recognition results. The proposed model combines natural language processing and genetic algorithm models to restore missing character information and correct erroneous recognition results. In experiments on our multi-view dataset and several benchmark datasets, the proposed method proves effective compared to state-of-the-art methods. The dataset and code will be made publicly available upon acceptance.
