Abstract

This study presents a novel deep-learning-based model for the automated reconstruction of cross-sectional drawings from stereo photographs. Targeted cross-sections captured in stereo photographs are detected and translated into sectional drawings using a faster region-based convolutional neural network (Faster R-CNN) and a Pix2Pix generative adversarial network. To address the challenge of perspective correction in the photographs, a novel camera pose optimization method is introduced. This method eliminates the need for camera calibration and image matching, thereby offering greater flexibility in camera positioning and facilitating the use of telephoto lenses while avoiding image-matching errors. Moreover, synthetic image datasets are used for training to facilitate the practical implementation of the proposed model in construction industry applications, given the limited availability of open datasets in this field. The applicability of the proposed model was evaluated through experiments on the cross-sections of curtain wall components. The results demonstrated superior measurement accuracy compared with current laser-scanning and camera-based measurement methods for construction components.
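The perspective-correction step the abstract refers to amounts to rectifying image points through a projective (homography) mapping once the camera pose has been optimized. The following is a minimal, illustrative sketch of applying such a mapping; the 3x3 matrix `H` here is a hypothetical example, not the paper's estimated pose, and the paper's actual optimization procedure is not reproduced.

```python
# Illustrative sketch (not the authors' code): mapping a 2-D image point
# through a 3x3 homography H, the kind of projective transform used to
# rectify perspective distortion once a camera pose has been estimated.

def apply_homography(H, pt):
    """Map point (x, y) through homography H (row-major 3x3 nested list).

    Homogeneous coordinates: [x', y', w]^T = H @ [x, y, 1]^T, then divide
    by w to return to Cartesian image coordinates.
    """
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# The identity homography leaves points unchanged (sanity check).
I = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(I, (3.0, 4.0)))  # -> (3.0, 4.0)

# A hypothetical perspective warp: the third row introduces the
# depth-dependent scaling that calibration or pose optimization must undo.
H = [[1.0, 0.0, 5.0],
     [0.0, 1.0, 2.0],
     [0.001, 0.0, 1.0]]
print(apply_homography(H, (100.0, 50.0)))
```

In the paper's setting the homography follows from the optimized camera pose rather than from feature matching, which is what removes the dependence on calibration targets and match quality.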
