Abstract

This article proposes a scheme for automatic extraction of text from scene images. We first apply a model-based segmentation procedure to the scene image in LCH color space, which separates out homogeneous connected components (CCs). We then inspect these CCs to identify likely text components, defining a number of features that distinguish text from non-text components. During learning, we consider the distribution of each feature independently and approximate it with a parametric distribution family, using maximum likelihood to obtain the best-fitting parameters. The joint distribution of all the features defines a class-conditional (text or non-text) distribution. During testing, a CC is assigned to the class whose distribution gives it the higher probability. Our experiments on the database of the ICDAR 2003 Robust Reading Competition show satisfactory performance in distinguishing text from non-text.
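The classification rule described above amounts to a naive-Bayes decision: fit each feature's distribution per class by maximum likelihood, then assign a component to the class with the higher product of per-feature densities. The abstract does not name the parametric families used, so the sketch below assumes a 1-D Gaussian per feature purely for illustration; the feature values themselves are hypothetical placeholders, not the paper's actual features.

```python
import math

def fit_gaussian(values):
    # Maximum-likelihood estimates for a 1-D Gaussian:
    # sample mean and (biased) sample variance.
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n
    return mu, var

def log_pdf(x, mu, var):
    # Log-density of N(mu, var) at x; logs avoid underflow
    # when multiplying many per-feature densities.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def classify(features, text_params, nontext_params):
    # Naive-Bayes decision: sum the per-feature log-densities
    # (i.e. the log of the joint under independence) for each
    # class and pick the class with the higher score.
    def score(params):
        return sum(log_pdf(x, mu, var)
                   for x, (mu, var) in zip(features, params))
    return "text" if score(text_params) > score(nontext_params) else "nontext"
```

During learning, `fit_gaussian` would be run once per feature per class over the training CCs; at test time, `classify` evaluates a new CC's feature vector against both fitted parameter sets.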
