Neural networks have been shown to struggle with compositional generalization, but existing work has largely targeted semantic parsing tasks. In this paper, we study the challenge in the context of machine translation and introduce a dedicated Chinese→English dataset, CoGnition (Compositional Generalization Machine Translation). The training, validation, and test sets contain 216k, 10k, and 10k sentence pairs, respectively; the sentences are simple but cover a variety of phrases. More importantly, we construct 2,160 novel compounds (phrases) unseen in the training data and embed each compound into 5 distinct contexts, forming a CG-test set of 10,800 samples. We evaluate Transformer models using both conventional metrics such as BLEU and our proposed compound translation error rates, demonstrating their difficulty in handling compositional generalization. For a Transformer trained from scratch, 27.42% of test sentences are translated incorrectly, and 62.88% of compounds fail to be translated consistently. Pretrained language models (PLMs), including large language models (LLMs), still exhibit approximately 30% inconsistency. Moreover, we conduct an empirical analysis of various factors that affect compound translation performance, and show that the performance of PLMs may be overestimated due to unavoidable composition exposure in pretraining data.
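The compound translation error rates described above can be sketched as follows. This is a minimal illustration, assuming each novel compound appears in 5 contexts and that each sample carries a binary correctness judgment; the function name and data layout are illustrative, not taken from the paper. The instance-level rate counts mistranslated samples, while the aggregate-level rate counts compounds that are mistranslated in at least one of their contexts.

```python
from typing import Dict, List, Tuple

def compound_error_rates(results: Dict[str, List[bool]]) -> Tuple[float, float]:
    """Compute illustrative instance- and aggregate-level error rates.

    `results` maps each novel compound to per-context correctness flags
    (hypothetical layout: True = compound translated correctly in that context).
    """
    total_samples = sum(len(flags) for flags in results.values())
    wrong_samples = sum(flags.count(False) for flags in results.values())
    # A compound is translated inconsistently if any of its contexts fails.
    inconsistent = sum(1 for flags in results.values() if not all(flags))
    instance_rate = wrong_samples / total_samples
    aggregate_rate = inconsistent / len(results)
    return instance_rate, aggregate_rate

# Example with two compounds, 5 contexts each:
rates = compound_error_rates({
    "compound_a": [True, True, True, True, True],
    "compound_b": [True, False, True, True, True],
})
```

In this example, 1 of 10 samples is wrong (instance rate 0.1) and 1 of 2 compounds is inconsistent (aggregate rate 0.5), showing how the aggregate-level rate can be much higher than the instance-level rate, as in the 27.42% vs. 62.88% figures reported above.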