In motion estimation, illumination change is a persistent obstacle that often causes severe performance degradation in optical flow computation. The underlying reason is that most estimation methods fail to formalize a unified definition, in either the color or the gradient domain, that covers diverse environmental changes. In this paper, we propose a new solution based on deep convolutional networks to address this key issue. Our idea is to train deep convolutional networks to represent the complex motion features that arise under illumination change, and then to predict the final optical flow fields. To this end, we construct a training dataset of multi-exposure image pairs by applying a series of non-linear exposure adjustments to traditional optical flow datasets. Our multi-exposure flow network (MEFNet) model consists of three main components: a low-level feature network, a fusion feature network, and a motion estimation network. The first two components form the contracting part of the model, extracting and representing multi-exposure motion features; the third component is the expanding part, which learns to predict high-quality optical flow. Compared with many state-of-the-art methods, our approach eliminates the obstacle of illumination change and yields optical flow results with competitive accuracy and time efficiency. Moreover, the strong performance of our model is also demonstrated in multi-exposure video applications such as HDR (high dynamic range) composition and flicker removal.
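As a rough illustration of the data-construction step described above, the sketch below applies a random non-linear exposure adjustment to each image of a flow-annotated pair. The paper does not specify the adjustment curves here; the gain-plus-gamma family, function names, and parameter ranges are our own assumptions.

```python
import numpy as np

def random_exposure_adjust(img, rng):
    """Apply a random non-linear exposure change to an image in [0, 1].

    A simple gain-plus-gamma curve stands in for the unspecified family of
    non-linear adjustments; the parameter ranges below are assumptions.
    """
    gain = rng.uniform(0.3, 3.0)    # simulated exposure gain (assumed range)
    gamma = rng.uniform(0.5, 2.0)   # non-linear tone curve (assumed range)
    return np.clip((img * gain) ** gamma, 0.0, 1.0)

def make_multi_exposure_pair(img1, img2, seed=0):
    """Turn one flow-annotated image pair into a multi-exposure pair.

    The ground-truth flow between img1 and img2 is unchanged: only the
    photometric appearance is altered, not the scene geometry.
    """
    rng = np.random.default_rng(seed)
    return random_exposure_adjust(img1, rng), random_exposure_adjust(img2, rng)
```

Because the adjustment is purely photometric, the original flow labels remain valid supervision for the adjusted pair.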
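To make the three-component layout concrete, here is a minimal PyTorch sketch of a contracting/expanding structure of the kind the abstract describes. Layer counts, channel widths, and module names are illustrative assumptions, not the published MEFNet architecture.

```python
import torch
import torch.nn as nn

class MEFNetSketch(nn.Module):
    """Illustrative contracting/expanding layout; not the published MEFNet."""

    def __init__(self):
        super().__init__()
        # Low-level feature network: shared encoder applied to each exposure.
        self.low_level = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Fusion feature network: merges the two exposures' feature maps.
        self.fusion = nn.Sequential(
            nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Motion estimation network: expanding part that upsamples back to
        # full resolution and outputs a 2-channel (u, v) flow field.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),
        )

    def forward(self, img1, img2):
        f1, f2 = self.low_level(img1), self.low_level(img2)
        fused = self.fusion(torch.cat([f1, f2], dim=1))
        return self.decoder(fused)

# Usage: flow = MEFNetSketch()(torch.randn(1, 3, 256, 256),
#                              torch.randn(1, 3, 256, 256))  # -> (1, 2, 256, 256)
```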