Abstract

Neural Machine Translation (NMT) models generally perform translation using a fixed-size lexical vocabulary, which constitutes an important bottleneck to their generalization capability and overall translation quality. The standard approach to overcome this limitation is to segment words into subword units, typically using external tools with arbitrary heuristics, resulting in vocabulary units that are not optimized for the translation task. Recent studies have shown that the same approach can be extended to perform NMT directly at the level of characters, which can deliver translation accuracy on par with subword-based models; on the other hand, this requires relatively deeper networks. In this paper, we propose a more computationally efficient solution for character-level NMT that implements a hierarchical decoding architecture, where translations are generated successively at the level of words and characters. We evaluate different methods for open-vocabulary NMT on the machine translation task from English into five languages with distinct morphological typology, and show that the hierarchical decoding model can reach higher translation accuracy than the subword-level NMT model using significantly fewer parameters, while demonstrating better capacity for learning longer-distance contextual and grammatical dependencies than the standard character-level NMT model.
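
To illustrate the idea of hierarchical decoding described in the abstract, the following is a minimal PyTorch sketch, not the paper's actual architecture: an outer word-level RNN produces one hidden state per target word, and an inner character-level RNN spells out the characters of each word conditioned on that state. The module names, dimensions, use of GRU cells, and input shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HierarchicalDecoder(nn.Module):
    """Sketch of a two-level (word -> character) decoder."""
    def __init__(self, n_chars, char_dim=64, word_dim=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.word_rnn = nn.GRUCell(word_dim, word_dim)   # one step per target word
        self.char_rnn = nn.GRUCell(char_dim, word_dim)   # one step per character
        self.char_out = nn.Linear(word_dim, n_chars)     # character softmax logits

    def forward(self, word_feats, char_ids):
        # word_feats: (num_words, word_dim) word-level inputs
        #             (e.g. attention context vectors from the encoder)
        # char_ids:   (num_words, chars_per_word) gold characters (teacher forcing)
        logits = []
        word_state = torch.zeros(1, self.word_rnn.hidden_size)
        for w in range(word_feats.size(0)):
            # Outer RNN: advance the word-level state.
            word_state = self.word_rnn(word_feats[w:w + 1], word_state)
            # Inner RNN: generate the characters of this word from the word state.
            char_state = word_state
            for c in range(char_ids.size(1)):
                char_vec = self.char_emb(char_ids[w, c]).unsqueeze(0)
                char_state = self.char_rnn(char_vec, char_state)
                logits.append(self.char_out(char_state))
        return torch.cat(logits, dim=0)  # (num_words * chars_per_word, n_chars)

# Example: 4 target words of up to 6 characters each, over a 40-character alphabet.
dec = HierarchicalDecoder(n_chars=40)
out = dec(torch.randn(4, 256), torch.randint(0, 40, (4, 6)))  # -> (24, 40) logits
```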

Highlights

  • Neural Machine Translation (NMT) models are typically trained using a fixed-size lexical vocabulary

  • Subword segmentation methods are typically deployed as a pre-processing step before training the NMT model, so the predicted set of subword units is essentially not optimized for the translation task

  • We present the results of an extensive evaluation comparing conventional approaches for open-vocabulary NMT in the machine translation task from English into five morphologically-rich languages, where each language belongs to a different language family and has a distinct morphological typology


Introduction

Neural Machine Translation (NMT) models are typically trained using a fixed-size lexical vocabulary. The prominent approach to overcome this limitation is to segment words into subword units (Sennrich et al., 2016) and perform translation based on a vocabulary composed of these units. Subword segmentation methods generally rely on statistical heuristics that lack any linguistic notion. Since they are typically deployed as a pre-processing step before training the NMT model, the predicted set of subword units is essentially not optimized for the translation task. Cherry et al. (2018) extended the approach of NMT based on subword units to implement the translation model directly at the level of characters, which could reach performance comparable to the subword-based model, although this requires much larger networks that may be more difficult to train. The increased sequence lengths that result from processing sentences as sequences of characters also augment the computational cost and pose a possible limitation, since sequence models typically have limited capacity in remembering long-distance context.
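
As an illustration of the pre-processing step mentioned above, the sketch below learns and applies a subword (BPE) vocabulary with the SentencePiece library. The file names and vocabulary size are hypothetical, and the paper's experiments may rely on a different toolkit; the point is only that segmentation is fixed before NMT training begins.

```python
import sentencepiece as spm

# Learn a fixed-size subword vocabulary from the training corpus
# ("train.en" and "bpe_en" are hypothetical names).
spm.SentencePieceTrainer.train(
    input="train.en",
    model_prefix="bpe_en",
    vocab_size=16000,
    model_type="bpe",
)

# Segment sentences into subword units before training the NMT model.
sp = spm.SentencePieceProcessor(model_file="bpe_en.model")
print(sp.encode("characters are segmented into subword units", out_type=str))
```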
