Abstract
The automatic segmentation of buildings from satellite images is extremely challenging because of the complex shapes of buildings. Deep convolutional neural networks (DCNNs) have recently enabled accurate pixel-level labeling with precise boundaries. However, the large number of pixel-level labels required for DCNN-based semantic segmentation of buildings is a considerable obstacle. To address this issue, we propose a framework that relies on deep seeds and optimal segmentation to extract buildings from very high resolution imagery. Because resizing to a low resolution causes a significant loss of detail in high-resolution images, input images are cropped into patches rather than resized before being passed to the DCNN. For both the image patches and the resized images, a classification network locates deep seeds within the building and nonbuilding classes. A boundary map is then predicted by passing the resized image through a convolutional oriented boundary network. A hierarchical segmentation tree is built, and the optimal segmentation is determined by seeking the optimal tree cut. The final segmentation is achieved with a graphical model whose nodes propagate information from the deep seeds to unmarked regions. Experiments on the ISPRS two-dimensional semantic labeling contest (Potsdam) and the WHU building datasets show that the proposed framework significantly improves building segmentation accuracy. These improvements are achieved by a cost-effective computing method that does not require training a segmentation network.
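The cropping step described above (splitting a very high resolution image into fixed-size patches instead of downscaling it) can be illustrated with a minimal sketch. This is an assumption-based illustration, not the authors' code: the function name `crop_into_patches` and the non-overlapping, edge-discarding tiling policy are hypothetical simplifications.

```python
import numpy as np

def crop_into_patches(image, patch_size):
    """Split a high-resolution (H, W, C) image into non-overlapping
    square patches rather than resizing it, so fine detail that a
    downscale would destroy is preserved for the DCNN.
    Hypothetical sketch: border strips that do not fill a complete
    patch are simply discarded here."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch_size + 1, patch_size):
        for left in range(0, w - patch_size + 1, patch_size):
            patches.append(image[top:top + patch_size,
                                 left:left + patch_size])
    return patches

# Example: a 512 x 512 three-band image yields 16 patches of 128 x 128,
# each retaining its original ground sampling distance.
img = np.zeros((512, 512, 3), dtype=np.uint8)
patches = crop_into_patches(img, 128)
```

Each patch keeps the native resolution of the source image, which is the motivation for preferring cropping over resizing in the framework.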