Handwriting recognition is the task of converting images of handwritten text into digital text: given an input image, the system predicts the corresponding textual output. Optical Character Recognition (OCR) technology has conventionally fulfilled this role. With the surge in mobile phone usage, text detection via mobile cameras is gaining significance in fields such as medical prescription processing and exam evaluation. To enhance image quality, noise-reduction techniques such as binarization and thresholding are applied; the image is then processed to segment and extract individual letters. In this paper, we propose a neural network classifier that combines Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. The model is trained on an existing feature dataset; input images are passed through the network, and the recognized words are written out as a text document. The final predicted output is obtained via Connectionist Temporal Classification (CTC)-based loss computation.

Keywords: Handwritten Recognition, Deep Learning Techniques, Optical Character Recognition.
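The abstract does not specify the exact architecture, so the following is only a minimal illustrative sketch of the kind of CNN + LSTM pipeline trained with CTC loss that it describes, written in PyTorch. The `CRNN` class, all layer sizes, and the alphabet size `num_classes` are hypothetical assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Illustrative CNN + BiLSTM model for line-level handwriting recognition."""
    def __init__(self, num_classes, img_height=32):
        super().__init__()
        # Convolutional feature extractor: grayscale image -> feature maps
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
        )
        feat_height = img_height // 4  # two 2x2 poolings halve height twice
        # Recurrent layer reads the CNN feature columns left to right
        self.lstm = nn.LSTM(128 * feat_height, 256,
                            bidirectional=True, batch_first=True)
        # Project to per-timestep class scores (characters + CTC blank)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):                        # x: (B, 1, H, W)
        f = self.cnn(x)                          # (B, C, H', W')
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # (B, W', C*H')
        out, _ = self.lstm(f)                    # (B, W', 512)
        return self.fc(out)                      # (B, W', num_classes)

# CTC loss expects (T, B, C) log-probabilities; blank index 0 is assumed
model = CRNN(num_classes=80)
images = torch.randn(4, 1, 32, 128)             # dummy batch of line images
logits = model(images).log_softmax(2).permute(1, 0, 2)
targets = torch.randint(1, 80, (4, 10))         # dummy label sequences
input_lengths = torch.full((4,), logits.size(0), dtype=torch.long)
target_lengths = torch.full((4,), 10, dtype=torch.long)
loss = nn.CTCLoss(blank=0)(logits, targets, input_lengths, target_lengths)
```

CTC lets such a model train without per-character alignment: the loss marginalizes over all alignments between the target string and the per-timestep output columns, which is why no letter-level segmentation labels are needed.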