Abstract
Accurate building segmentation plays a crucial role in a wide range of applications such as urban planning, monitoring, and mapping. Various deep learning models have been employed for building segmentation; however, these models analyze images from a single view. Given the limitations of single-view models, we propose a novel multi-view U-Net deep model for accurate building segmentation that incorporates multiple views of the images. We employ two pre-trained convolutional neural network architectures, MobileNetV2 and ResNet50, to extract features representing two different views of our images. By fusing these features, the proposed method effectively captures complementary information, leading to enhanced segmentation accuracy. To further improve the model's performance, we incorporate skip connections and up-convolutional layers to ensure fine-grained feature propagation. Our experimental results on a large building dataset demonstrate a significant improvement in segmentation accuracy (91%) over state-of-the-art methods, highlighting the effectiveness of our multi-view fusion approach and underscoring the benefits of creating different views of the input, the novel concept proposed in this paper. This research has the potential to redefine the landscape of building segmentation in applications such as urban planning and mapping. We also conducted a test over a large study area (the city scale of Belval, Luxembourg), demonstrating the method's capability and efficiency in segmenting satellite images covering a large extent and reinforcing its potential for real-world applications.
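To make the described architecture concrete, below is a minimal sketch of a dual-encoder ("two-view") U-Net in TensorFlow/Keras, assuming both pre-trained backbones consume the same input image and are fused at the bottleneck. The abstract does not specify the fusion point, decoder depth, or which branch supplies the skip connections, so those choices here are illustrative assumptions; the layer names are the standard tf.keras.applications identifiers. Per-backbone input preprocessing (each network's `preprocess_input`) is omitted for brevity.

```python
# Illustrative sketch only: the fusion strategy and decoder layout are
# assumptions, not the authors' exact configuration.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_multiview_unet(input_shape=(256, 256, 3), num_classes=1):
    inp = layers.Input(shape=input_shape)

    # View 1: MobileNetV2 encoder, pre-trained on ImageNet.
    mobilenet = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_tensor=inp)
    # View 2: ResNet50 encoder on the same input image.
    resnet = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_tensor=inp)

    # Fuse the two views at the bottleneck (both at 1/32 resolution).
    fused = layers.Concatenate()([
        mobilenet.get_layer("out_relu").output,        # 1/32, 1280 ch
        resnet.get_layer("conv5_block3_out").output])  # 1/32, 2048 ch

    # Skip connections drawn from the ResNet50 branch (illustrative choice).
    skips = [resnet.get_layer(name).output for name in (
        "conv4_block6_out",   # 1/16
        "conv3_block4_out",   # 1/8
        "conv2_block3_out",   # 1/4
        "conv1_relu")]        # 1/2

    # Decoder: up-convolutions with skip connections for fine-grained
    # feature propagation, as described in the abstract.
    x = fused
    for skip, filters in zip(skips, (512, 256, 128, 64)):
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                   activation="relu")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    # Final upsample to full resolution and per-pixel building mask.
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                               activation="relu")(x)
    out = layers.Conv2D(num_classes, 1, activation="sigmoid")(x)
    return Model(inp, out, name="multiview_unet")
```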