Abstract

The dominant model in neural machine translation, the Transformer, relies heavily on self-attention and has significantly improved both translation accuracy and speed. However, challenges remain: it does not incorporate linguistic knowledge or exploit the syntactic structure of natural language during translation, which leads to problems such as mistranslation and omission. Moreover, the Transformer's autoregressive decoding proceeds strictly from left to right, failing to make full use of contextual information and suffering from exposure bias. To address these limitations, this paper proposes a syntax-aware bidirectional decoding neural machine translation model. By employing both a forward and a backward decoder, the decoding results incorporate contextual information from both directions. In addition, the model integrates dependency syntax so that target-language sentences are generated under syntactic guidance. Finally, an optimization strategy based on the Teacher Forcing mechanism is introduced to balance the discrepancy between Teacher Forcing during training and autoregressive decoding during testing, thereby alleviating exposure bias.
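To make the training/testing discrepancy concrete, the sketch below shows one common way of balancing Teacher Forcing against autoregressive decoding during training: at each target position the decoder is fed either the gold token or its own prediction, with a probability that decays over training (a scheduled-sampling-style strategy). This is a minimal PyTorch sketch under assumed interfaces; the `decoder` callable, its return shape, and the decay schedule are illustrative assumptions rather than the paper's actual optimization strategy, and the forward/backward decoders and dependency-syntax components are not shown.

```python
# Illustrative sketch only: scheduled-sampling-style mixing of Teacher Forcing
# and autoregressive decoding. Interfaces and schedule are assumptions.
import torch
import torch.nn.functional as F


def decode_with_mixed_inputs(decoder, memory, targets, teacher_forcing_prob):
    """Decode step by step, feeding the gold token with probability
    `teacher_forcing_prob` and the model's own prediction otherwise."""
    batch_size, tgt_len = targets.shape
    inputs = targets[:, :1]                # assumed BOS token at position 0
    logits_per_step = []
    for t in range(1, tgt_len):
        logits = decoder(inputs, memory)   # assumed: (batch, t, vocab) logits
        step_logits = logits[:, -1, :]
        logits_per_step.append(step_logits)
        predicted = step_logits.argmax(dim=-1, keepdim=True)
        gold = targets[:, t:t + 1]
        # Per-sentence coin flip: gold token (Teacher Forcing) vs. own prediction.
        use_gold = (torch.rand(batch_size, 1, device=targets.device)
                    < teacher_forcing_prob)
        next_token = torch.where(use_gold, gold, predicted)
        inputs = torch.cat([inputs, next_token], dim=1)
    return torch.stack(logits_per_step, dim=1)  # (batch, tgt_len - 1, vocab)


def training_step(decoder, memory, targets, step, total_steps):
    # Assumed schedule: decay Teacher Forcing probability from 1.0 toward 0.5,
    # so the model is gradually exposed to its own predictions during training.
    p = max(0.5, 1.0 - 0.5 * step / total_steps)
    logits = decode_with_mixed_inputs(decoder, memory, targets, p)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           targets[:, 1:].reshape(-1))
    return loss
```

The key design point is that the decoder occasionally conditions on its own (possibly erroneous) outputs during training, so the input distribution it sees at training time moves closer to what it will see at test time, which is the discrepancy the paper's optimization strategy targets.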

