Abstract

3D real scenes are digital virtual spaces that represent the real world in a photorealistic, three-dimensional, and temporally sequential manner. Existing methods for constructing and updating 3D models, such as oblique photography and laser scanning, struggle to meet the demand for perceiving the real world intuitively, dynamically, and in real time. In recent years, methods that fuse the rapidly growing body of video data with 3D models have become increasingly popular. Compared with existing methods, this approach enhances the real-time perception of 3D scenes by combining the real-time character of video with the intuitive character of 3D models. In this article, we propose a real-time fusion method for multiple videos and 3D real scenes based on optimal viewpoint selection. First, 3D reconstruction and video camera calibration are used to prepare the basic data for fusing the videos with the 3D model. Second, a video-space restoration method based on visible-surface detection is presented, and the overlapping regions among multiple videos are determined. Third, a segmentation method based on optimal viewpoint selection is given to split each overlapping region into the corresponding camera spaces. Finally, the 2D videos are dynamically fitted to the 3D model using dynamic texture mapping, completing the fusion and rendering of the 3D real scene. Experimental verification shows that a multi-video and 3D real-scene fusion system built with the proposed method achieves a good overall visual effect while keeping the algorithm's time cost low and rendering efficient.
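To make the core idea concrete, the Python snippet below is a minimal sketch of the projective texture mapping that underlies this kind of video-to-3D fusion: model vertices are projected through a calibrated video camera to obtain per-frame texture coordinates. The function names (`projection_matrix`, `project_to_video`) and all camera parameters are illustrative assumptions, not the paper's implementation, and the simple in-image, in-front-of-camera test stands in for the paper's visible-surface detection, which additionally handles occlusion.

```python
# Illustrative sketch of projective (dynamic) texture mapping; not the
# authors' implementation. Assumes an intrinsic matrix K and extrinsics
# (R, t) obtained from video camera calibration.
import numpy as np

def projection_matrix(K, R, t):
    """Compose a 3x4 projection matrix P = K [R | t] from calibration."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project_to_video(P, vertices, width, height):
    """Project Nx3 model vertices into normalized video texture coordinates.

    Returns (uv, visible): uv is Nx2 in [0, 1]; visible marks vertices that
    land inside the image and in front of the camera. A full system would
    also test occlusion (visible-surface detection).
    """
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # Nx4
    proj = (P @ homo.T).T                                      # Nx3
    depth = proj[:, 2]                                         # camera-space depth
    uv_px = proj[:, :2] / depth[:, None]                       # pixel coordinates
    visible = (depth > 0) & \
        (uv_px[:, 0] >= 0) & (uv_px[:, 0] < width) & \
        (uv_px[:, 1] >= 0) & (uv_px[:, 1] < height)
    uv = uv_px / np.array([width, height])                     # normalize to [0, 1]
    return uv, visible

# Example: a toy pinhole camera at the origin looking down +Z.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
P = projection_matrix(K, R, t)
verts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 4.0], [-3.0, 0.0, 2.0]])
uv, visible = project_to_video(P, verts, width=640, height=480)
print(uv)
print(visible)  # the third vertex projects outside the frame
```

Recomputing the texture coordinates (or reusing static ones while swapping the projected video frame each tick) is what makes the mapping dynamic; in a multi-camera setup, vertices visible to several cameras would be assigned by the optimal-viewpoint segmentation the abstract describes.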