Automatic extraction of flame areas plays an important role in forest fire detection: it enables accurate characterization of the spatial distribution and development trend of a forest fire, thereby supporting effective protection of forest resources. However, owing to the instability and spread of fires and the complexity of the background, accurate early fire detection is extremely challenging. Moreover, in the early stage the flame area occupies a far smaller proportion of image pixels than the background, which causes a severe class imbalance problem. With the rapid development of deep learning, progress has been made in flame extraction, but existing networks still exhibit deficiencies such as limited feature representation, poor ability to capture small objects, and insufficient processing of local features. This paper proposes an attention-based dual-encoding segmentation network, abbreviated as ADE-Net, for pixel-wise early fire detection. To achieve strong feature representation, a dual-encoding path consisting of semantic units and spatial units is introduced to extract richer features, and an attention fusion module (AFM) is introduced to fully integrate spatial and semantic information and achieve effective feature aggregation. In addition, to address the class imbalance problem, a multi-attention fusion (MAF) module is introduced to obtain more discriminative features so that the segmentation network focuses on the key pixel areas. Furthermore, a feature enhancement module, named the attention-guided enhancement (AGE) module, is proposed to enrich the representation of local feature maps. Finally, to achieve better multi-scale global feature extraction and fusion, a global context fusion (GCF) module is incorporated into the bottleneck layer for multi-scale feature enhancement. Experimental results show that the proposed ADE-Net detects early fires well in remote sensing images and performs competitively against advanced segmentation models.
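To make the dual-encoding idea concrete, the following is a minimal PyTorch sketch of a spatial branch, a semantic branch, and an attention-based fusion of the two. It is an illustrative assumption, not the paper's actual AFM: the module names (SpatialUnit, SemanticUnit, AttentionFusion), the channel-attention gating, and all layer sizes are hypothetical choices made only to show how spatial detail and semantic context can be reweighted and merged.

```python
# Hypothetical sketch of a dual-encoding path with attention-based fusion.
# All module names and layer sizes are illustrative, not the paper's design.
import torch
import torch.nn as nn


class SpatialUnit(nn.Module):
    """Shallow stride-1 convolution block that preserves spatial detail."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class SemanticUnit(nn.Module):
    """Strided convolution followed by upsampling to capture wider context."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, x):
        return self.up(self.down(x))


class AttentionFusion(nn.Module):
    """Channel attention (squeeze-and-excitation style) over the concatenated
    spatial and semantic features, letting the network reweight each branch."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, 1),
            nn.Sigmoid(),
        )
        self.project = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, spatial_feat, semantic_feat):
        fused = torch.cat([spatial_feat, semantic_feat], dim=1)
        return self.project(fused * self.gate(fused))


if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)       # dummy RGB remote-sensing patch
    spatial = SpatialUnit(3, 64)(x)       # full-resolution detail branch
    semantic = SemanticUnit(3, 64)(x)     # downsample-then-upsample context branch
    out = AttentionFusion(64)(spatial, semantic)
    print(out.shape)                      # torch.Size([1, 64, 256, 256])
```

Because the attention gate is computed from both branches jointly, heavily background-dominated images can suppress uninformative channels, which is one plausible way a fused dual encoder helps with the small-flame class imbalance described above.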