Abstract
Automatic building segmentation from satellite images is an important task for various applications such as urban mapping, disaster management and regional planning. With the broader availability of very high-resolution satellite images, deep learning-based techniques have been widely used for remote sensing image tasks. In this study, we generated a new building dataset, the Istanbul dataset, for the building segmentation task. 150 Pléiades image tiles of 1500 × 1500 pixels, covering an area of 85 km² of Istanbul, were used, and approximately 40,000 buildings were labelled, representing diverse building structures and spatial distributions. We extensively investigated the ideal architecture, encoder and hyperparameter settings for building segmentation using the new Istanbul dataset. More than 60 experiments were conducted with state-of-the-art architectures such as U-Net, Unet++, DeepLabv3+, FPN and PSPNet, combined with different pre-trained encoders and hyperparameters. Our experiments showed that the Unet++ architecture with an SE-ResNeXt101 encoder pre-trained on ImageNet provides the best results, with 93.8% IoU on the Istanbul dataset. To demonstrate the generalizability of our solution, the ideal network was also trained separately on the Inria and Massachusetts building segmentation datasets, producing IoU values of 75.39% and 92.53%, respectively. The results indicate that our ideal network settings outperform other methods for building segmentation even without any architecture-specific modification. The weight files and an inference notebook are available at: https://github.com/TolgaBkm/Istanbul_Dataset.
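The IoU (Intersection over Union) percentages reported above can be computed for binary building masks as the ratio of the overlap between prediction and ground truth to their union. A minimal NumPy sketch of this metric (the function name and the example masks are illustrative, not taken from the paper):

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU for binary masks (1 = building pixel, 0 = background)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(intersection) / float(union)

# Toy 2x3 masks: 2 overlapping pixels, 4 pixels in the union -> IoU = 0.5
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(iou_score(pred, target))  # 0.5
```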