<p>Neural Machine Translation (NMT) has attracted increasing attention in recent years owing to its promising performance over conventional approaches such as Statistical Machine Translation. Nevertheless, when applied to language pairs with divergent structures, such as English-Arabic, the pair considered in this work, its performance degrades.<br />Meanwhile, unsupervised pre-training of large neural models has recently enabled a significant leap forward in Natural Language Processing (NLP). By warm-starting from published checkpoints, NLP practitioners have pushed the state of the art on multiple benchmarks while saving considerable computation time. The emphasis so far has been mainly on natural language understanding problems. In this paper, we demonstrate the efficacy of pre-trained checkpoints for Arabic Machine Translation. We develop a Transformer-based sequence-to-sequence model that is compatible with the publicly available pre-trained checkpoints of the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) and the Arabic Generative Pre-trained Transformer (AraGPT), and we conduct a thorough empirical study on the usefulness of initializing both the encoder and the decoder of our Arabic MT model with these checkpoints. Our models achieve new state-of-the-art results in Arabic MT.</p>
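<p>As a rough illustration of the warm-starting idea described above (not the authors' exact training setup), the sketch below builds an encoder-decoder model whose encoder is initialized from an AraBERT checkpoint and whose decoder is initialized from an AraGPT checkpoint, using the Hugging Face Transformers library; the specific checkpoint identifiers are assumptions for the example.</p>
<pre><code class="language-python">
# Minimal sketch: warm-starting a sequence-to-sequence MT model from
# published Arabic checkpoints. Checkpoint names below are illustrative
# assumptions, not necessarily the ones used in the paper.
from transformers import EncoderDecoderModel, AutoTokenizer

# Encoder weights come from an AraBERT checkpoint, decoder weights from an
# AraGPT2 checkpoint; the decoder's cross-attention layers are newly
# initialized and learned during MT fine-tuning.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "aubmindlab/bert-base-arabertv2",  # assumed AraBERT checkpoint id
    "aubmindlab/aragpt2-base",         # assumed AraGPT checkpoint id
)

enc_tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabertv2")
dec_tokenizer = AutoTokenizer.from_pretrained("aubmindlab/aragpt2-base")

# Generation-related settings required before seq2seq fine-tuning.
model.config.decoder_start_token_id = dec_tokenizer.bos_token_id
model.config.pad_token_id = enc_tokenizer.pad_token_id
</code></pre>
<p>From this point the model can be fine-tuned on parallel data like any other sequence-to-sequence model; other encoder/decoder combinations (e.g., initializing only one side from a checkpoint) can be compared in the same way.</p>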