Abstract

In existing statistical machine translation models, especially syntax-based ones, there is a long-standing trade-off between the amount of information a translation unit preserves and its ability to generalize when translating new sentences. Neural networks have been successfully applied to reordering and to end-to-end machine translation. In this paper, we propose a novel neural encoder-decoder for syntactic translation rules: a dependency edge transfer rule encoder-decoder (DETED) that takes the source side of a transfer rule together with its local context as input and outputs the corresponding target side, thereby learning the source-to-target matching of dependency edge transfer rules. It benefits both from the dependency edge, the most relaxed syntactic constraint, which preserves generalization ability, and from the local context, which provides additional information that improves matching ability. The structure of the encoder-decoder is concise: given the source side of a translation rule as input, it decodes the corresponding target side and makes explicit the positional relation of the dependency edge. The trained model is then used to re-score transfer rules during decoding. Experiments on three NIST test sets show a significant performance improvement, with an average BLEU gain of 1.39 points over the baseline.
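To make the re-scoring role of the model concrete, the sketch below shows how an encoder-decoder-style scorer could be plugged into rule selection: encode the source side of a dependency edge transfer rule plus local context, then rank candidate target sides. This is a minimal illustrative sketch, not the paper's architecture; the deterministic pseudo-embeddings, the averaging encoder, and the dot-product scorer are all stand-ins for the learned components, and every token and function name is hypothetical.

```python
# Illustrative sketch only: a toy stand-in for re-scoring dependency edge
# transfer rules with an encoder-decoder. Not the paper's DETED model.
import hashlib
import random

DIM = 8  # toy embedding dimension (assumption, not from the paper)

def embed(token, dim=DIM):
    """Deterministic pseudo-embedding for a token (stand-in for learned
    embeddings): seed a PRNG from a stable hash of the token string."""
    seed = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

def encode(source_edge, context):
    """Encode the source side of a rule plus its local context by
    averaging token embeddings (stand-in for the real encoder)."""
    tokens = list(source_edge) + list(context)
    vecs = [embed(t) for t in tokens]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

def score_target(enc, target_tokens):
    """Score a candidate target side as the summed dot product between
    the source encoding and each target token embedding (stand-in for
    the decoder's conditional probability)."""
    total = 0.0
    for t in target_tokens:
        e = embed(t)
        total += sum(enc[i] * e[i] for i in range(DIM))
    return total

def rescore(source_edge, context, candidates):
    """Re-rank candidate target sides for one dependency edge,
    best-scoring candidate first."""
    enc = encode(source_edge, context)
    return sorted(candidates, key=lambda c: -score_target(enc, c))

# Usage: one head-dependent source edge with two candidate target sides
# (all tokens hypothetical).
edge = ("juyou", "nengli")
ctx = ("ta", "hen")
cands = [("has", "ability"), ("owns", "power")]
ranked = rescore(edge, ctx, cands)
```

In the actual system, `score_target` would be replaced by the trained decoder's probability of the target side given the encoded source side, and the resulting scores would be combined with the other features of the translation model during decoding.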
