The incidence of blinding eye diseases is highly correlated with changes in retinal morphology, which are clinically detected by segmenting retinal structures in fundus images. However, existing methods have limitations in accurately segmenting thin vessels. In recent years, deep learning has made remarkable progress in medical image segmentation, but the loss of edge information caused by repeated convolution and pooling limits the final segmentation accuracy. To this end, this paper proposes a pixel-level retinal vessel segmentation network with multiple-dimension attention and adaptive feature fusion. A multiple-dimension attention enhancement (MDAE) block is proposed to acquire more local edge information. Meanwhile, a deep guidance fusion (DGF) block and a cross-pooling semantic enhancement (CPSE) block are proposed to capture more global context. Further, the predictions of different decoding stages are learned and aggregated by an adaptive weight learner (AWL) unit, which obtains the best weights for effective feature fusion. Experimental results on three public fundus image datasets show that the proposed network effectively improves retinal vessel segmentation performance. In particular, the proposed method achieves AUCs of 98.30%, 98.75%, and 98.71% on the DRIVE, CHASE_DB1, and STARE datasets, respectively, while the F1 score on all three datasets exceeds 83%. The source code of the proposed model is available at https://github.com/gegao310/VesselSeg-Pytorch-master.
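The AWL unit described above aggregates the predictions of different decoding stages with learned weights. A minimal sketch of one plausible parameterization is shown below, assuming the unit learns one scalar logit per stage and normalizes the weights with a softmax; the class name, shapes, and this specific design are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class AdaptiveWeightLearner(nn.Module):
    """Hypothetical sketch of an AWL-style unit: fuse per-stage
    segmentation predictions with learned, softmax-normalized weights."""

    def __init__(self, num_stages: int):
        super().__init__()
        # one learnable scalar logit per decoding stage
        self.logits = nn.Parameter(torch.zeros(num_stages))

    def forward(self, preds):
        # preds: list of (B, 1, H, W) stage prediction maps
        w = torch.softmax(self.logits, dim=0)      # weights sum to 1
        stacked = torch.stack(preds, dim=0)        # (S, B, 1, H, W)
        return (w.view(-1, 1, 1, 1, 1) * stacked).sum(dim=0)


# usage: fuse three decoder-stage probability maps
awl = AdaptiveWeightLearner(num_stages=3)
preds = [torch.rand(2, 1, 64, 64) for _ in range(3)]
fused = awl(preds)  # shape (2, 1, 64, 64)
```

Because the logits are `nn.Parameter`s, the fusion weights are trained end-to-end with the rest of the network rather than fixed by hand.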