Abstract
Automatic object removal with completion of obstructed facades in the urban environment is essential for many applications, such as scene restoration, environmental impact assessment, and urban mapping. However, previous object removal methods typically require a user to manually create a mask around unwanted objects and to obtain background facade information in advance, which is labor-intensive in large-scale, multi-task projects. Moreover, accurately detecting the objects to be removed in the cityscape and inpainting the static obstructed building facade to obtain plausible images are the main challenges for this objective. To overcome these difficulties, this study addresses object removal with facade inpainting from two aspects. First, we propose an image-based cityscape elimination method for automatic object removal and facade inpainting that applies semantic segmentation to detect several classes, including pedestrians, riders, vegetation, and cars, and uses generative adversarial networks (GANs) to fill the detected regions with background textures and patch information from street-level imagery. Second, we propose a workflow that automatically filters unoccluded building facades from street view images and tailors a dataset of original and mask images for the GAN-based image inpainting model. Furthermore, several full-reference image quality assessment (IQA) metrics are introduced to evaluate the generated image quality. Validation results demonstrate the feasibility and effectiveness of the proposed method, and the synthesized images are visually realistic and semantically consistent.
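Full-reference IQA metrics score a generated image against a ground-truth reference. As an illustration only (the abstract does not name the specific metrics used), the sketch below computes peak signal-to-noise ratio (PSNR), one common full-reference metric, in plain NumPy; the `ref` and `test` arrays are synthetic stand-ins for a ground-truth facade patch and an inpainted result.

```python
import numpy as np

def psnr(ref: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio, a standard full-reference IQA metric.

    Higher values mean the test image is closer to the reference.
    """
    mse = float(np.mean((ref - test) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)

# Synthetic example: a "ground-truth" patch and a lightly perturbed
# "inpainted" version of it (stand-ins for real evaluation data).
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
test = np.clip(ref + 0.01 * rng.standard_normal((64, 64)), 0.0, 1.0)

print(f"PSNR: {psnr(ref, test):.2f} dB")
```

In practice, a library implementation (e.g., from an image-processing package) would typically be used, along with structural metrics such as SSIM that compare local luminance, contrast, and structure rather than raw pixel error.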
Highlights
Automatic object removal is a widely studied and fundamental task for environmental impact assessment, because the sheer number of unwanted objects that frequently occlude the scene hinders important tasks such as stakeholder engagement [1] and design support [2]
The street view images were split into training and testing sets totaling 2700 pictures: 2250 images (750 per class) were used for training, accounting for 83% of the entire dataset, and the remaining 450 testing images accounted for 17%
The results show that the inpainting method based on generative adversarial networks (GANs), trained on large-scale data, can effectively exploit image semantics and performs better than the exemplar-based approach in non-repetitive and complicated scenes
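The split described above can be sketched as follows. The class names and file names are hypothetical placeholders; only the counts (2700 images total, 750 training images per class, 450 test images) come from the source.

```python
import random

# Hypothetical per-class image lists: 3 classes x 900 images = 2700 total.
# Class names and file names are placeholders, not from the source.
images_by_class = {
    cls: [f"{cls}_{i:04d}.png" for i in range(900)]
    for cls in ("class_a", "class_b", "class_c")
}

random.seed(42)  # reproducible shuffle
train, test = [], []
for cls, imgs in images_by_class.items():
    random.shuffle(imgs)
    train += imgs[:750]   # 750 training images per class
    test += imgs[750:]    # remaining 150 per class go to the test set

print(len(train), len(test))  # → 2250 450
print(round(len(train) / 2700, 2), round(len(test) / 2700, 2))  # → 0.83 0.17
```

Shuffling within each class before slicing keeps the split stratified, so every class contributes the same number of images to both sets.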
Summary
Automatic object removal is a widely studied and fundamental task for environmental impact assessment, because the sheer number of unwanted objects (e.g., pedestrians, riders, vegetation, and cars) that frequently occlude the scene hinders important tasks such as stakeholder engagement [1] and design support [2]. Object removal techniques combined with augmented reality (AR) can address the collision problem between planned design objects and existing objects: without removal, newly designed 3D virtual objects would be intermingled with the existing ones, producing an inaccurate visualization. By virtually eliminating and adding objects in a perceived environment, stakeholders can assess the future urban environment design [4]
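The pipeline summarized above can be sketched in miniature: a semantic label map marks the unwanted classes, a binary mask is derived from it, and the masked pixels are filled. All names and label ids here are illustrative, and a trivial neighbour-averaging fill stands in for the GAN-based inpainter, which learns far richer facade textures.

```python
import numpy as np

# Hypothetical label ids for unwanted classes (e.g., pedestrian, car).
UNWANTED = {1, 2}

def build_mask(labels: np.ndarray) -> np.ndarray:
    """Binary mask: True where a pixel belongs to an unwanted class."""
    return np.isin(labels, list(UNWANTED))

def inpaint(image: np.ndarray, mask: np.ndarray, iters: int = 200) -> np.ndarray:
    """Fill masked pixels by iterated 4-neighbour averaging.

    A crude diffusion fill standing in for the GAN inpainter.
    """
    out = image.copy()
    out[mask] = out[~mask].mean()  # coarse initialisation from the background
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[mask] = avg[mask]      # update only the hole; background is fixed
    return out

# Toy 8x8 "facade" of uniform intensity with a 2x2 occluding object.
image = np.full((8, 8), 0.5)
labels = np.zeros((8, 8), dtype=int)
labels[3:5, 3:5] = 1               # a small "pedestrian" region
mask = build_mask(labels)
result = inpaint(image, mask)
print(np.allclose(result, 0.5, atol=1e-3))  # → True (hole matches background)
```

In the real method, the label map would come from a semantic segmentation network and the fill from a GAN trained on the unoccluded-facade dataset; the structure of the pipeline, mask construction followed by hole filling, is the same.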