Currency classification and image-to-text OCR are essential technologies with applications in domains such as finance, retail, and automation. Although the approach outlined in this paper can, in principle, detect currencies from multiple countries, the practical implementation focuses solely on Indian paper currency. The system enables convenient currency verification at any time and location, leveraging Convolutional Neural Networks (CNNs) for effective implementation. Extensive testing was conducted on each denomination of Indian currency, yielding an accuracy of 95%. To further refine accuracy, a classification model was developed that incorporates all the pertinent factors discussed in the paper. Notably, the unique features of paper currency play a pivotal role in the recognition process; by emphasizing these features and harnessing CNN technology, the proposed system shows significant promise in accurately detecting and validating Indian paper currency and stands poised to serve a variety of applications effectively.

Image-to-text OCR, on the other hand, focuses on extracting text from images, enabling the conversion of non-editable documents into searchable and editable formats. Both technologies contribute to automation and efficiency in handling diverse visual information. Optical Character Recognition (OCR) is a technology designed to recognize and interpret both printed and handwritten characters by scanning text images. The process segments the text image into regions, isolates individual lines, and identifies each character along with its spacing. After individual characters are isolated, the system analyzes their texture and topological attributes, examining corner points, distinctive characteristics of the regions within each character, and the ratio of character area to convex area. Before recognition begins, the system builds templates that store the distinctive features of uppercase and lowercase letters, digits, and symbols; these templates serve as reference models during the recognition phase. During recognition, the texture and topological features of each extracted character are matched against the stored templates to determine the exact character: the extracted features are compared with the templates of all characters, their similarity is measured, and the character is recognized accordingly.
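As a rough illustration of the template-matching step described above, the sketch below computes the two character features named in the abstract, the corner-point count and the ratio of character area to convex area, and selects the closest stored template. It is a minimal sketch rather than the authors' implementation: it assumes OpenCV 4.x, a single pre-isolated binary character image, and a hypothetical `templates` dictionary of precomputed feature vectors.

```python
import cv2
import numpy as np


def character_features(char_img):
    """Texture/topological features for one isolated character.

    `char_img` is assumed to be a binary (0/255), single-channel image of an
    already-segmented character. Returns [corner count, area / convex area].
    """
    corners = cv2.goodFeaturesToTrack(char_img, maxCorners=50,
                                      qualityLevel=0.05, minDistance=3)
    n_corners = 0 if corners is None else len(corners)

    contours, _ = cv2.findContours(char_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:                       # blank patch: no shape features
        return np.array([float(n_corners), 0.0], dtype=np.float32)

    cnt = max(contours, key=cv2.contourArea)            # largest blob = character
    area = cv2.contourArea(cnt)
    hull_area = cv2.contourArea(cv2.convexHull(cnt))
    solidity = area / hull_area if hull_area > 0 else 0.0

    return np.array([float(n_corners), solidity], dtype=np.float32)


def recognize_character(char_img, templates):
    """Match an extracted character against precomputed template features.

    `templates` (hypothetical) maps a label such as 'A' or '7' to the feature
    vector obtained by running character_features on a reference image.
    The label whose stored features are closest (Euclidean distance) wins.
    """
    feats = character_features(char_img)
    return min(templates, key=lambda label: np.linalg.norm(feats - templates[label]))
```

In this sketch, the template dictionary would be built once by applying `character_features` to clean reference images of every letter, digit, and symbol, mirroring the template-creation step the abstract describes before recognition begins.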