Abstract
The main objective of translation is to convey the meaning of words from one language to another; transliteration, in contrast, transfers no contextual meaning between languages and considers only the individual letters that make up each word. In this paper, an integrated neural network transliteration and translation (NNTT) model based on an autoencoder is developed. The model is segmented into a transliteration model and a translation model. The transliteration model converts text from one script to another and is evaluated on the Dakshina dataset for Hindi, typically using a sequence-to-sequence architecture with an attention mechanism. The translation model is trained to translate text from one language to another on the WAT (Workshop on Asian Translation) 2021 dataset, using a sequence-to-sequence architecture with an attention mechanism similar to the one used in the transliteration model. The proposed NNTT model merges the in-domain and out-of-domain frameworks into a single training framework so that information is transferred between the domains. The evaluated results show that the proposed model performs effectively in comparison with existing systems for the Hindi language.
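The abstract describes sequence-to-sequence models with an attention mechanism for both the transliteration and translation components. As a minimal illustrative sketch (not the authors' implementation; the paper's exact attention variant and hyperparameters are not specified here), dot-product attention over a set of encoder hidden states can be written as follows:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def dot_attention(decoder_state, encoder_states):
    # Score each encoder time step against the current decoder state,
    # normalize the scores into attention weights, and return the
    # weighted context vector together with the weights.
    scores = encoder_states @ decoder_state   # shape: (T,)
    weights = softmax(scores)                 # shape: (T,), sums to 1
    context = weights @ encoder_states        # shape: (d,)
    return context, weights

# Toy example: 4 encoder time steps, hidden size 3 (illustrative values).
enc = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 0.0]])
dec = np.array([1.0, 1.0, 0.0])
ctx, w = dot_attention(dec, enc)
```

The context vector is then concatenated with (or fed into) the decoder state to predict the next output character (transliteration) or word (translation); the last encoder state most aligned with the decoder state receives the largest attention weight.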
IAES International Journal of Artificial Intelligence (IJ-AI)