Abstract

Urban development alters landscapes, frequently degrading environmental services and quality of life. High-resolution remote sensing images offer an opportunity to detect subtle changes in land cover and can capture the features of ground objects. However, traditional change detection approaches struggle with large and rapidly expanding datasets, low levels of automation, limited computational efficiency, and inconsistent identification accuracy and standards caused by operator variability. The rapid accumulation of remote sensing data has therefore made accurate, automated, and standardized change detection both essential and increasingly difficult. In this paper, a deep learning approach based on V-Net and a Bilateral Attention Network (V-BANet) is implemented to segment landscapes and extract features from the images. First, the bi-temporal images are segmented with V-Net to independently identify the objects in each image. Spatial and channel attention blocks are then employed in the Bilateral Attention Network to learn more discriminative features. Finally, relationships between the features are discovered by comparing the original feature map of one image with the attention-updated feature map of the other. Objective and subjective experiments are performed on the public bi-temporal high-resolution ONERA Satellite Change Detection (OSCD) dataset and the LEVIR-CD dataset. The proposed approach achieved 99.29% accuracy and 98.31% IoU on the OSCD dataset, and 99.42% accuracy and 98.83% IoU on the LEVIR-CD dataset. On both datasets, the experimental results show that the proposed method outperforms several state-of-the-art techniques.
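To make the bilateral attention and feature-comparison step concrete, the following is a minimal PyTorch-style sketch of the idea described in the abstract. It is illustrative only, not the authors' exact V-BANet implementation: the class names (`ChannelAttention`, `SpatialAttention`, `BilateralComparison`), the CBAM-style attention formulation, and the absolute-difference fusion are assumptions, and the V-Net encoder that would produce the bi-temporal feature maps is omitted.

```python
# Illustrative sketch of bilateral (channel + spatial) attention and
# cross-date feature comparison; not the published V-BANet code.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention block (assumed form)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each feature channel by its learned importance.
        return x * self.mlp(x)


class SpatialAttention(nn.Module):
    """Spatial attention computed from pooled channel statistics (assumed form)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        # Re-weight each spatial location by its learned importance.
        return x * attn


class BilateralComparison(nn.Module):
    """Refine each date's features with attention, then compare the original
    feature map of one date with the attention-updated map of the other."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_attn = ChannelAttention(channels)
        self.spatial_attn = SpatialAttention()

    def refine(self, f):
        return self.spatial_attn(self.channel_attn(f))

    def forward(self, f_t1, f_t2):
        diff_12 = torch.abs(f_t1 - self.refine(f_t2))
        diff_21 = torch.abs(f_t2 - self.refine(f_t1))
        # Fused change features would feed a change-map decoder head.
        return torch.cat([diff_12, diff_21], dim=1)


if __name__ == "__main__":
    # Stand-ins for feature maps a V-Net-style encoder would extract
    # from the two acquisition dates.
    f_t1 = torch.randn(1, 64, 128, 128)
    f_t2 = torch.randn(1, 64, 128, 128)
    fused = BilateralComparison(64)(f_t1, f_t2)
    print(fused.shape)  # torch.Size([1, 128, 128, 128])
```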
