Abstract
At present, Neural Machine Translation (NMT) is the leading approach to machine translation, substantially outperforming statistical machine translation. It has attracted so much attention that many researchers are exploring new state-of-the-art approaches to obtain better translations. Since deep learning and transfer learning are now widely applied, we attempt to fit a pre-trained model's tokenization scheme into an NMT architecture so that the pre-trained weights yield better translations. In this work we study how pre-trained BERT models can be exploited for supervised NMT. We compare several ways of integrating a pre-trained BERT model with an NMT model, and we study the impact of the monolingual data used to train BERT, which we propose to leverage when translating a parallel corpus.
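To make the idea concrete, the sketch below shows one common way such an integration can be set up: the source sentence is tokenized with BERT's own WordPiece vocabulary and encoded by the pre-trained BERT encoder, whose outputs then feed a Transformer decoder through cross-attention. This is an illustrative assumption about the architecture, not the paper's exact implementation; the checkpoint name, vocabulary size, and decoder dimensions are placeholders.

```python
# Minimal sketch of a BERT-encoder + Transformer-decoder NMT model.
# Assumptions: Hugging Face `transformers` and PyTorch; the checkpoint,
# target vocabulary size, and layer counts are illustrative only.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer


class BertEncoderNMT(nn.Module):
    def __init__(self, bert_name="bert-base-multilingual-cased",
                 tgt_vocab_size=32000, d_model=768, n_layers=6, n_heads=8):
        super().__init__()
        # Pre-trained BERT supplies the source-side tokenization and
        # contextual representations; its weights may be frozen or fine-tuned.
        self.bert = BertModel.from_pretrained(bert_name)
        self.tgt_embed = nn.Embedding(tgt_vocab_size, d_model)
        decoder_layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, num_layers=n_layers)
        self.out_proj = nn.Linear(d_model, tgt_vocab_size)

    def forward(self, src_input_ids, src_attention_mask, tgt_input_ids):
        # Encode the source with BERT (tokenized with BERT's WordPiece vocab).
        memory = self.bert(input_ids=src_input_ids,
                           attention_mask=src_attention_mask).last_hidden_state
        tgt = self.tgt_embed(tgt_input_ids)
        # Causal mask: each target position attends only to earlier positions.
        sz = tgt.size(1)
        causal_mask = torch.triu(torch.full((sz, sz), float("-inf")), diagonal=1)
        hidden = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.out_proj(hidden)  # logits over the target vocabulary


# Usage sketch: the source is tokenized with BERT's own tokenizer so the
# pre-trained weights see the subword units they were trained on.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
batch = tokenizer(["The cat sits on the mat."], return_tensors="pt", padding=True)
model = BertEncoderNMT()
tgt_prefix = torch.zeros((1, 5), dtype=torch.long)  # placeholder target prefix
logits = model(batch["input_ids"], batch["attention_mask"], tgt_prefix)
```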