Abstract

Image segmentation is an essential step in vision sensing and image processing. It enables the understanding of the classes, spatial locations, and extents of objects in a scene, which can support a wide range of construction applications such as progress monitoring, safety management, and productivity analysis. The recent ground-breaking achievements of deep learning-based approaches to semantic segmentation come at the cost of expensive large-scale training datasets annotated at the pixel level. Although building information modeling (BIM) has been leveraged to alleviate labeling costs by using automatically generated, color-coded images as semantic labels, the differences between BIM models and real-world scenes make it difficult to apply networks trained on BIM-generated labels to real images, and reducing those differences takes nontrivial effort. To address these problems, this paper proposes a weakly supervised segmentation approach that uses inexpensive image-level labels. The boundary information missing from image-level labels is compensated for by BIM-extracted object information. The proposed method consists of three modules: (1) detect initial object locations from image-level labels; (2) extract object information from BIM as prior knowledge; and (3) incorporate the prior knowledge into the network to refine the detected object locations. Three extensive experiments are designed to evaluate the effectiveness of the proposed method. Results show that the proposed method substantially improves the detected object areas by using prior knowledge of target objects from BIM and outperforms state-of-the-art weakly supervised methods.
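The abstract's first module, detecting initial object locations from image-level labels, is commonly implemented with class activation mapping (CAM). The paper does not specify its localization technique here, so the following is only a minimal illustrative sketch of the CAM idea: weight the final convolutional feature maps by a classifier's per-channel weights for one class (e.g., "wall" or "column") and sum them into a coarse heat map. All shapes, names, and values are hypothetical.

```python
import numpy as np

def class_activation_map(features, class_weights):
    """Coarse object localization from image-level supervision (CAM sketch).

    features: (C, H, W) feature maps from the last convolutional layer.
    class_weights: (C,) classifier weights for one target class, e.g. from
        a global-average-pooling classification head.
    Returns an (H, W) heat map normalized to [0, 1].
    """
    # Weighted sum over channels: contract the channel axis of both inputs.
    cam = np.tensordot(class_weights, features, axes=([0], [0]))  # (H, W)
    cam = np.maximum(cam, 0.0)        # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize for thresholding
    return cam

# Toy example: 4 feature channels on an 8x8 grid (illustrative only).
rng = np.random.default_rng(0)
feats = rng.random((4, 8, 8))
w = np.array([0.5, -0.2, 0.8, 0.1])
heatmap = class_activation_map(feats, w)
mask = heatmap > 0.5  # rough initial object region
```

In the paper's pipeline, such a rough region would then be refined using object shape and location priors extracted from the BIM model, since CAM-style maps typically cover only the most discriminative parts of an object.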

