ABSTRACT
Although large-scale language models have substantially improved machine translation quality, their high computational cost and resource consumption hinder their widespread adoption in practical applications. This research therefore introduces an English corpus-based machine translation algorithm that leverages knowledge distillation from a large language model, with the goal of enhancing translation quality while reducing the model's computational demands. We first conducted a thorough analysis of the English corpus to identify prevalent language patterns and structures. We then developed a knowledge distillation approach that transfers the translation expertise of a large teacher model to a smaller student model, improving both translation accuracy and efficiency. In particular, we designed a distillation strategy with a dynamic temperature hyperparameter that effectively enhances translation precision. In the experimental phase, we trained and evaluated our algorithm on several standard English corpora. The findings indicate that, compared with current machine translation systems, our method significantly reduces computational resource requirements while preserving translation quality.
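The abstract does not specify how the dynamic temperature strategy is implemented. The following is a minimal sketch of one plausible reading: standard temperature-scaled knowledge distillation (soft-target KL term plus cross-entropy on gold labels) in which the temperature is annealed over training. The function names, the linear schedule, and all hyperparameters (`t_start`, `t_end`, `alpha`) are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn.functional as F


def dynamic_temperature(step, total_steps, t_start=4.0, t_end=1.0):
    """Hypothetical schedule: anneal the distillation temperature
    linearly from t_start down to t_end over training. This is one
    possible 'dynamic temperature' strategy; the paper's exact
    schedule is not given in the abstract."""
    frac = min(step / max(total_steps, 1), 1.0)
    return t_start + frac * (t_end - t_start)


def distillation_loss(student_logits, teacher_logits, targets,
                      step, total_steps, alpha=0.5):
    """Standard KD objective: KL divergence between temperature-softened
    teacher and student distributions (scaled by T^2, as in Hinton et
    al.'s formulation), mixed with cross-entropy on the gold targets."""
    T = dynamic_temperature(step, total_steps)
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce
```

In a translation setting, `student_logits` and `teacher_logits` would be per-token vocabulary logits flattened over the batch and sequence dimensions; annealing the temperature lets the student match the teacher's full output distribution early in training and sharpen toward the hard targets later.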