In statistical machine translation, translation prediction considers not only the aligned source word itself but also its surrounding source context. Learning context representations, particularly with neural networks, is a promising way to improve translation quality. However, most existing methods process context words sequentially and neglect long-distance dependencies in the source sentence. In this paper, we propose a novel neural approach to source dependency-based context representation for translation prediction. The proposed model not only encodes source long-distance dependencies but also captures functional similarities, enabling better translation prediction (i.e., of word forms and of ambiguous words). To verify our method, the proposed model is incorporated into both phrase-based and hierarchical phrase-based translation models. Experiments on large-scale Chinese-to-English and English-to-German translation tasks show that the proposed approach achieves significant improvements over the baseline systems and outperforms several existing context-enhanced methods.
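As a rough illustration of the idea (not the authors' implementation), the minimal sketch below scores translation candidates for a source word from the embeddings of its dependency head and children rather than its linear neighbors, so that syntactically related words remain adjacent in the context even when they are far apart in the surface string. All identifiers, dimensions, and the toy parse are hypothetical assumptions.

```python
# A minimal sketch (not the paper's model) of dependency-based context
# representation for translation prediction. Vocabulary, candidates,
# dimensions, and the toy parse are all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy source vocabulary and hypothetical target translation candidates.
src_vocab = {"ta": 0, "da": 1, "le": 2, "yi": 3, "chang": 4, "qiu": 5}
candidates = ["he", "play", "played", "ball"]

DIM = 8
emb = rng.normal(scale=0.1, size=(len(src_vocab), DIM))  # source embeddings
W = rng.normal(scale=0.1, size=(DIM, DIM))               # hidden layer
V = rng.normal(scale=0.1, size=(len(candidates), DIM))   # candidate scoring

def dependency_context(heads, idx):
    """Collect the dependency context of word `idx`: its head plus its
    children in the parse tree, rather than its linear neighbors."""
    ctx = []
    if heads[idx] >= 0:  # head word (ROOT is marked with -1)
        ctx.append(heads[idx])
    ctx += [i for i, h in enumerate(heads) if h == idx]  # children
    return ctx

def predict(tokens, heads, idx):
    """Score translation candidates for tokens[idx] via a feed-forward
    layer over the averaged embeddings of the word and its dependency
    context, then normalize with a softmax."""
    ids = [src_vocab[tokens[idx]]]
    ids += [src_vocab[tokens[j]] for j in dependency_context(heads, idx)]
    h = np.tanh(emb[ids].mean(axis=0) @ W)  # context representation
    scores = V @ h
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()

# Toy parse for "ta da le yi chang qiu": the verb `da` governs the
# subject `ta` and the object `qiu`; in longer sentences such arguments
# can be linearly distant yet stay adjacent in the dependency tree.
tokens = ["ta", "da", "le", "yi", "chang", "qiu"]
heads = [1, -1, 1, 5, 5, 1]  # dependency heads (ROOT = -1)
for cand, p in zip(candidates, predict(tokens, heads, 1)):
    print(f"{cand:8s} {p:.3f}")
```

The design point the sketch makes is that the context set for `da` includes `qiu` even though several words intervene, which is exactly the long-distance information a sequential window-based context model would miss.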