Abstract

Augmented Virtual Environment (AVE), or virtual-reality fusion, systems fuse dynamic videos with static three-dimensional (3D) models of a virtual environment, providing an effective way to visualize and understand multichannel surveillance systems. However, texture distortion caused by viewpoint changes in such systems is a critical issue that must be addressed. To minimize texture fusion distortion, this paper presents a novel virtual environment system that operates in two phases, offline and online, to dynamically fuse multiple surveillance videos with a virtual 3D scene. In the offline phase, a static virtual environment is obtained by performing 3D photogrammetric reconstruction from input images of the scene. In the online phase, the virtual environment is augmented by fusing multiple videos through two optional strategies: one dynamically maps frames of the different videos onto the 3D model of the virtual environment, and the other extracts moving objects and represents them as billboards. The system can be used to visualize the 3D environment from any viewpoint, augmented by real-time videos. Experiments and user studies in different scenarios demonstrate the superiority of our system.
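The abstract only names the video-to-model mapping strategy; as a rough, hypothetical illustration (not the authors' code), the core operation of projectively texturing a reconstructed scene is to reproject each 3D point through a surveillance camera's intrinsics K and extrinsics (R, t) to obtain texture coordinates in the video frame. The function name, parameters, and camera values below are illustrative assumptions:

```python
import numpy as np

def project_to_video_uv(points_world, K, R, t, width, height):
    """Project world-space points into a surveillance camera's image,
    yielding normalized (u, v) texture coordinates in [0, 1].

    points_world : (N, 3) array of reconstructed 3D scene points
    K            : (3, 3) camera intrinsics
    R, t         : extrinsics mapping world -> camera coordinates
    width, height: video frame size in pixels
    """
    # Transform points into the camera frame: x_cam = R @ x_world + t.
    cam = points_world @ R.T + t
    # Points behind the camera (non-positive depth) cannot receive texture.
    in_front = cam[:, 2] > 0
    # Perspective projection onto the image plane, then divide by depth.
    pix = cam @ K.T
    pix = pix[:, :2] / pix[:, 2:3]
    # Normalize pixel coordinates by the frame size to get UVs.
    uv = pix / np.array([width, height])
    return uv, in_front

# Hypothetical camera: a 1920x1080 video with a 1000 px focal length.
K = np.array([[1000.0,    0.0, 960.0],
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
uv, visible = project_to_video_uv(np.array([[0.5, 0.2, 3.0]]), K, R, t, 1920, 1080)
```

In a real AVE renderer this projection would run per vertex (or per fragment) each frame, with an additional visibility test so that occluded surfaces are not textured with the wrong video content.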
