Abstract

Land cover classification is a multiclass segmentation task that assigns each pixel to a natural or human-made category of the Earth's surface, such as water, soil, natural vegetation, crops, and human infrastructure. Limited by hardware computational resources and memory capacity, most existing studies preprocess original remote sensing images by downsampling them or cropping them into small patches of less than 512 × 512 pixels before feeding them to a deep neural network. However, downsampling incurs a loss of spatial detail, makes small segments hard to discriminate, and reverses the spatial-resolution progress achieved through decades of effort. Cropping images into small patches causes a loss of long-range context information, and restoring the predicted patches to the original image size adds extra latency. To address these weaknesses, we present MKANet, an efficient lightweight semantic segmentation network. Tailored to the characteristics of top-view high-resolution remote sensing imagery, MKANet utilizes sharing kernels to handle ground segments of inconsistent scales simultaneously and equally, and employs a parallel, shallow architecture to boost inference speed and readily support image patches more than 10× larger. To enhance the discrimination of boundaries and small segments, we also propose a method that captures category-impurity areas, exploits boundary information, and imposes an extra penalty on misjudgments of boundaries and small segments. Both visual interpretation and quantitative metrics from extensive experiments demonstrate that MKANet achieves state-of-the-art accuracy on two land-cover classification datasets and infers 2× faster than other competitive lightweight networks. All these merits highlight the potential of MKANet in practical applications.
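The boundary-penalty idea described above can be illustrated with a minimal sketch: derive a mask of category-impurity pixels from the label map (here, pixels whose 4-neighbourhood contains more than one class) and up-weight the per-pixel loss there. The mask construction, function names, and the weight value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def boundary_mask(labels: np.ndarray) -> np.ndarray:
    """Mark pixels whose 4-neighbourhood contains more than one class,
    a simple proxy for 'category impurity areas' (illustrative only)."""
    mask = np.zeros_like(labels, dtype=bool)
    # Compare each pixel with its vertical and horizontal neighbours.
    mask[:-1, :] |= labels[:-1, :] != labels[1:, :]
    mask[1:, :] |= labels[1:, :] != labels[:-1, :]
    mask[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    mask[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    return mask

def weighted_pixel_loss(per_pixel_ce: np.ndarray, labels: np.ndarray,
                        boundary_weight: float = 2.0) -> float:
    """Weighted average of per-pixel cross-entropy, with an extra
    penalty (boundary_weight > 1) on boundary/impurity pixels."""
    w = np.where(boundary_mask(labels), boundary_weight, 1.0)
    return float((w * per_pixel_ce).sum() / w.sum())
```

In practice the per-pixel cross-entropy would come from the network's softmax output; the weighting step itself is independent of how those values are produced.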

