Abstract

Retrieving desired information through keyword spotting from databases containing video, natural scene, and license plate images is a major challenge for expert systems because of the background and foreground variations that texts exhibit in real-time environments. To reduce the background complexity of input images, we introduce a new model based on fractional means that uses the neighboring information of pixels to widen the gap between text and background; text candidates are then obtained with the help of k-means clustering. The proposed approach explores the combination of Radon and Fourier coefficients to define context features based on the regular patterns of the coefficient distributions for the foreground and background of text candidates. This step eliminates non-text candidates irrespective of font type, size, color, orientation, and script, leaving representatives of texts. Exploiting the fact that text pixels share almost the same values, the proposed approach then restores missing text components from the Canny edge image through a new idea of minimum cost path based ring growing, and outputs keywords. Furthermore, the proposed approach extracts the same features locally and globally for spotting words in images. Experimental results on different benchmark databases, namely the ICDAR 2013, ICDAR 2015, YVT, and NUS video data; the ICDAR 2013, ICDAR 2015, SVT, and MSRA natural scene data; and the UCSC, Medialab, and Uninsubria license plate data, show that the proposed method is effective and useful compared to existing methods.
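The abstract mentions obtaining text candidates via k-means clustering after background suppression. As a minimal illustrative sketch, assuming a two-cluster intensity split where the brighter cluster is treated as the text-candidate layer (the function names, the choice of k = 2, and the brighter-cluster assumption are ours, not the paper's), this could look like:

```python
import numpy as np

def kmeans_intensity_2(values, iters=20):
    """Simple 1-D two-cluster k-means over pixel intensities.

    Illustrative stand-in for the paper's k-means step; centers are
    initialized at the min and max intensity for determinism.
    """
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Recompute each center as the mean of its assigned pixels.
        for j in range(2):
            sel = values[labels == j]
            if sel.size:
                centers[j] = sel.mean()
    return labels, centers

def text_candidates(gray):
    """Return a binary mask of the brighter intensity cluster,
    taken here (an assumption) as the text-candidate layer."""
    flat = gray.reshape(-1).astype(float)
    labels, centers = kmeans_intensity_2(flat)
    text_label = int(np.argmax(centers))
    return (labels == text_label).reshape(gray.shape)

# Toy image: dark background (~20) with a bright horizontal "stroke" (~220).
img = np.full((8, 8), 20)
img[3:5, 2:6] = 220
mask = text_candidates(img)
print(int(mask.sum()))  # → 8 pixels flagged as text candidates
```

In the paper's pipeline these candidates would then be filtered by the Radon- and Fourier-based context features; this sketch only covers the clustering stage.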
