Abstract
Insufficient or excessive redundancy of information in parallel corpora can interfere with semantic relevance analysis during machine translation and degrade translation quality. To address this, a neural machine translation model for scientific and technical English texts is designed that uses information entropy to determine base weights and a support vector machine (SVM) to eliminate redundant samples. Simulation experiments show that the proposed method (RNNSearch + IE + SVM) improves the BLEU score by 1.06 points over the baseline model on the English-German translation task. In the binary similarity-classification experiments, the SVM distinguishes both under-redundant and over-redundant samples effectively. The approach can noticeably improve the efficiency of neural machine translation for scientific and technical English texts and offers new ideas and methods for neural machine translation research.
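The pipeline sketched in the abstract (entropy-based base weights plus an SVM filter that discards redundant parallel-corpus samples before training) can be illustrated roughly as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: the feature set, labels, thresholds, and all function names are hypothetical, and only standard scikit-learn calls are used.

```python
# Illustrative sketch only: entropy-based sample weighting plus an SVM filter
# for redundant parallel-corpus samples. Features and labels are toy assumptions.
import math
from collections import Counter

import numpy as np
from sklearn.svm import SVC


def token_entropy(sentence: str) -> float:
    """Shannon entropy of the token distribution in one sentence (base-weight proxy)."""
    tokens = sentence.lower().split()
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def similarity_features(sent_a: str, sent_b: str) -> list[float]:
    """Simple lexical-overlap features between two sentences (hypothetical feature set)."""
    a, b = set(sent_a.lower().split()), set(sent_b.lower().split())
    union = a | b
    jaccard = len(a & b) / len(union) if union else 0.0
    len_ratio = min(len(a), len(b)) / max(len(a), len(b), 1)
    return [jaccard, len_ratio]


# Toy training data: feature vectors labelled 1 = redundant pair, 0 = informative pair.
X_train = np.array([[0.95, 0.9], [0.90, 1.0], [0.10, 0.5], [0.20, 0.8]])
y_train = np.array([1, 1, 0, 0])
svm = SVC(kernel="rbf").fit(X_train, y_train)

# Candidate corpus entries paired with their most similar existing sentence.
candidates = [
    ("the system boots in two seconds", "the system boots in two seconds"),
    ("the reactor core temperature is monitored continuously", "sensors log the core temperature"),
]

filtered = []
for sentence, nearest in candidates:
    weight = token_entropy(sentence)                  # information-entropy base weight
    redundant = svm.predict([similarity_features(sentence, nearest)])[0]
    if not redundant:                                 # keep only non-redundant samples
        filtered.append((sentence, weight))

print(filtered)
```

In this sketch the surviving sentence pairs, together with their entropy-derived weights, would then be passed to the NMT model (e.g. an RNNSearch-style baseline) for training; how the weights enter the training objective is not specified in the abstract.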