Abstract
According to statistics, there are 422 million speakers of the Arabic language. Islam is the second-largest religion in the world, and its followers constitute approximately 25% of the world’s population. Since the Holy Quran is written in Arabic, most Muslims have at least some familiarity with the language, and many countries use Arabic as their native and official language. In recent years, the number of Arabic-speaking internet users has increased, yet relatively little recognition work has targeted Arabic because of several complications. It is challenging to build a robust recognition system (RS) for cursive languages such as Arabic, and the challenge grows when a dataset contains variations in text size, fonts, colors, orientation, lighting conditions, noise, etc. To deal with these issues, deep learning models show noticeable results in data modeling and can handle large datasets. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) can extract good features and learn from sequential data. These two neural networks offer impressive results in many research areas such as text recognition, voice recognition, several tasks of Natural Language Processing (NLP), and others. This paper presents a CNN-RNN model with an attention mechanism for Arabic image text recognition. The model takes an input image and generates feature sequences through a CNN. These sequences are passed to a bidirectional RNN to obtain the feature sequences in order; since the bidirectional RNN works directly on these sequences, some preprocessing, such as text segmentation, can be skipped. A bidirectional RNN with an attention mechanism is used to generate the output, enabling the model to select the relevant information from the feature sequences. The attention mechanism allows end-to-end training through a standard backpropagation algorithm.
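To make the pipeline in the abstract concrete, below is a minimal sketch of a CNN feature extractor feeding a bidirectional RNN encoder and an attention-based decoder. It assumes PyTorch; the layer sizes, module names, and vocabulary size are illustrative placeholders, not the authors' exact configuration.

```python
# Minimal sketch of the CNN -> bidirectional RNN -> attention pipeline described above.
# Assumes PyTorch; layer sizes, names, and the character vocabulary are placeholders,
# not the configuration reported in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNEncoder(nn.Module):
    """Turns a grayscale text-line image into a sequence of column feature vectors."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # H/2, W/2
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # H/4, W/4
            nn.Conv2d(128, feat_dim, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),                               # collapse height
        )

    def forward(self, images):                  # images: (B, 1, H, W)
        f = self.conv(images)                   # (B, C, 1, W')
        return f.squeeze(2).permute(0, 2, 1)    # (B, W', C): one feature per image column

class AttentionDecoder(nn.Module):
    """Bidirectional RNN over the feature sequence plus additive attention per output step."""
    def __init__(self, feat_dim=256, hidden=256, vocab_size=100):
        super().__init__()
        self.encoder_rnn = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.attn_score = nn.Linear(2 * hidden + hidden, 1)   # alignment scores e_{t,i}
        self.decoder_cell = nn.LSTMCell(2 * hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, features, max_len=32):
        enc, _ = self.encoder_rnn(features)                    # (B, T, 2H)
        B, T, _ = enc.shape
        h = enc.new_zeros(B, self.decoder_cell.hidden_size)
        c = enc.new_zeros(B, self.decoder_cell.hidden_size)
        logits = []
        for _ in range(max_len):
            # Soft search: score every encoder position against the current decoder state.
            scores = self.attn_score(
                torch.cat([enc, h.unsqueeze(1).expand(-1, T, -1)], dim=-1))
            alpha = F.softmax(scores, dim=1)                   # (B, T, 1) attention weights
            context = (alpha * enc).sum(dim=1)                 # (B, 2H) context vector
            h, c = self.decoder_cell(context, (h, c))
            logits.append(self.out(h))                         # per-step character logits
        return torch.stack(logits, dim=1)                      # (B, max_len, vocab_size)

# Usage: a batch of two 32x128 text-line images -> per-step character logits.
cnn, dec = CNNEncoder(), AttentionDecoder()
images = torch.randn(2, 1, 32, 128)
print(dec(cnn(images)).shape)   # torch.Size([2, 32, 100])
```

In a real training setup the decoder would also consume an embedding of the previously predicted character and be trained with a cross-entropy loss over the ground-truth transcription; the sketch omits this to stay focused on how the feature sequence, attention weights, and context vector fit together.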
Highlights
The research community has been working for many years to recognize printed and written texts [1]
This paper presents an attention-based convolutional neural network (CNN)-recurrent neural network (RNN) model for Arabic image text recognition
This paper presents a state-of-the-art deep learning attention mechanism over an RNN that gives successful results on Arabic text datasets such as ALIF and ACTIV
Summary
The research community has been working for many years to recognize printed and written texts [1]. Text recognition from natural images can be slowed down by complex backgrounds [3]. This paper’s primary focus is on the recognition of Arabic texts from natural scenes. For Arabic video text recognition, this paper uses a sequence-to-sequence model. The difficulty with this approach is that all the knowledge of the feature sequences is stored in a fixed-length vector; the model has to visit all the information in this fixed vector, which becomes more difficult when the sequences are long. Therefore, there should be a mechanism that deals dynamically with the feature sequences. The attention mechanism dynamically picks the most relevant information from the feature sequence and feeds it to the context vector; for this, it soft-searches where the vital information is present in the input sequences [26]. This paper presents an attention-based CNN-RNN model for Arabic image text recognition
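The soft search described above can be written with the standard additive-attention equations. The notation below is the common formulation of this mechanism and is assumed here for illustration, not quoted from the paper: h_i are the bidirectional RNN feature states, s_{t-1} is the previous decoder state, a(·) is a learned scoring function, α_{t,i} are the attention weights, and c_t is the context vector fed to the decoder at step t.

```latex
% Standard soft-attention formulation (assumed notation, not the paper's exact symbols).
\[
e_{t,i} = a(s_{t-1}, h_i), \qquad
\alpha_{t,i} = \frac{\exp(e_{t,i})}{\sum_{k=1}^{T}\exp(e_{t,k})}, \qquad
c_t = \sum_{i=1}^{T} \alpha_{t,i}\, h_i
\]
```

Because every quantity above is differentiable, the attention weights are learned jointly with the rest of the network through standard backpropagation, which is what enables the end-to-end training mentioned in the abstract.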