Semantic segmentation is a challenging computer vision task that requires both contextual information and rich spatial detail. To this end, most methods introduce low-level features to recover spatial detail. However, low-level features lack global information, and introducing too many of them can disturb the segmentation result. In this paper, we extract low-level features under the guidance of abstract semantic features to improve segmentation results. Specifically, we propose a Pixel-wise Attention Module (PAM) that adaptively selects low-level features and a Dual Channel-wise Attention Fusion Module (DCAFM) that further fuses contextual information. These two modules apply the attention mechanism from a broader perspective that is not limited to inter-layer feature adjustment. Our architecture contains no complicated or redundant processing modules; by using features efficiently, it significantly reduces network complexity. We evaluate our approach on the Cityscapes, PASCAL VOC 2012, and PASCAL Context datasets, achieving 82.3% mean IoU on the PASCAL VOC 2012 test set without pre-training on the MS-COCO dataset.
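To make the two ideas concrete, the sketch below illustrates, in PyTorch, (a) semantically guided pixel-wise selection of low-level features and (b) a dual channel-wise reweighting before fusion. This is a minimal sketch under assumptions: the class names, layer choices, channel sizes, and gating forms are hypothetical illustrations of the stated ideas, not the paper's actual PAM or DCAFM implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelwiseAttentionSketch(nn.Module):
    """Sketch of pixel-wise selection: high-level semantic features
    predict a per-pixel gate over low-level features. The 1x1 conv and
    sigmoid gate are assumptions, not the paper's exact PAM."""

    def __init__(self, high_ch: int):
        super().__init__()
        # 1x1 conv collapses high-level features to a 1-channel spatial map
        self.attn = nn.Conv2d(high_ch, 1, kernel_size=1)

    def forward(self, low_feat, high_feat):
        # Upsample high-level features to the low-level spatial size
        high_up = F.interpolate(high_feat, size=low_feat.shape[2:],
                                mode="bilinear", align_corners=False)
        gate = torch.sigmoid(self.attn(high_up))  # per-pixel weight in [0, 1]
        # Keep low-level detail only where semantics deem it useful
        return low_feat * gate


class ChannelwiseFusionSketch(nn.Module):
    """Sketch of a dual channel-wise fusion: each branch is rescaled by
    channel weights derived from the other branch (squeeze-and-excitation
    style), then summed. Illustrative only; not the paper's DCAFM."""

    def __init__(self, ch: int):
        super().__init__()
        self.fc_a = nn.Conv2d(ch, ch, kernel_size=1)
        self.fc_b = nn.Conv2d(ch, ch, kernel_size=1)

    def forward(self, feat_a, feat_b):
        # Global average pooling "squeezes" each branch to channel statistics
        w_a = torch.sigmoid(self.fc_a(F.adaptive_avg_pool2d(feat_b, 1)))
        w_b = torch.sigmoid(self.fc_b(F.adaptive_avg_pool2d(feat_a, 1)))
        return feat_a * w_a + feat_b * w_b


# Usage with assumed tensor shapes
low = torch.randn(1, 256, 128, 128)   # low-level features
high = torch.randn(1, 512, 32, 32)    # high-level semantic features
pam = PixelwiseAttentionSketch(high_ch=512)
selected = pam(low, high)             # (1, 256, 128, 128)

fuse = ChannelwiseFusionSketch(ch=256)
fused = fuse(selected, torch.randn(1, 256, 128, 128))
```

The design intent captured here is that the spatial gate filters low-level noise pixel by pixel, while the channel-wise weights decide how much each feature map contributes during fusion; the precise attention computations are specified in the paper's method section.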