Abstract

Road information is an important type of geographic information. Road networks extracted from remote sensing images are widely used in mapping, traffic management, navigation, military applications, and many other fields. However, automatic road extraction from high-resolution remote sensing images suffers from problems such as incoherence, incompleteness, and poor connectivity. To address these issues, a semantic segmentation model for roads in high-resolution remote sensing images, called AGD-Linknet, is proposed, which integrates attention mechanisms, a gated decoder block, and dilated convolution. The model consists of three main parts. First, a stem block is used as the initial convolution layer to reduce information loss in the early convolution stage. Second, a series-parallel combination of dilated convolutions and a coordinate attention block is placed at the center of the network, which enlarges the receptive field and improves the extraction of spatial- and channel-domain features. Finally, gated convolution is introduced in the decoder to improve the extraction of road edges. Compared with U-Net, LinkNet, and D-LinkNet on the DeepGlobe dataset, the proposed AGD-Linknet improves the pixel accuracy, mean intersection over union, and F1-score of road recognition by 1.41%–11.52%, 0.0077–0.1473, and 0.0057–0.1292, respectively, demonstrating its effectiveness and feasibility in rural, urban, and suburban scenes. It can therefore be applied to road recognition and extraction tasks in high-resolution remote sensing imagery.
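To illustrate why stacked dilated convolutions enlarge the receptive field, as claimed for the center block above, the following is a minimal sketch using the standard receptive-field formula for stride-1 convolutions; the 3×3 kernel size and the dilation rates 1, 2, 4, 8 are assumptions for illustration (typical of D-LinkNet-style center blocks), not values stated in the abstract.

```python
# A 3x3 convolution with dilation d spans d*(k-1) + 1 = 2d + 1 input
# positions; stacking stride-1 layers adds each layer's span to the
# accumulated receptive field.

def receptive_field(dilations, kernel_size=3):
    """Receptive field of a series of stride-1 dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)
    return rf

# Four stacked 3x3 layers with dilation rates 1, 2, 4, 8:
print(receptive_field([1, 2, 4, 8]))  # 31 -> a 31x31 receptive field
```

The same four layers without dilation (`[1, 1, 1, 1]`) would cover only a 9×9 region, which is why dilation is attractive for capturing long, thin road structures without extra downsampling.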
