Abstract

Video fusion technology is increasingly applied in fields such as smart cities and smart agriculture, and it plays a significant role in enabling real-time scene monitoring and decision-making by surveillance personnel. Conventional methods for integrating geographic data with monitoring videos map two-dimensional geographic information onto the surveillance video. However, this approach exhibits significant mapping errors in scenes with large terrain variation and in 360-degree, multi-angle, real-time previews. To address this issue, this paper proposes a method for fusing monitoring videos with geospatial data based on three-dimensional modeling. The method constructs a virtual scene from digital elevation models and vector geographic data, and overlays the image rendered from the camera viewport in the virtual scene onto each frame of the monitoring video stream, thereby augmenting the video scene with geographic data. A system integrating monitoring videos with geospatial data is designed and implemented for practical application scenarios. Experimental results demonstrate that the method effectively addresses issues such as unfamiliarity with the geographical environment, ambiguity of location information, and mapping inaccuracies caused by terrain variation and changes in camera intrinsic parameters, showing superior applicability.
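
For illustration only, the following is a minimal sketch of the overlay step described above, assuming OpenCV is used for frame blending and that a hypothetical render_viewport helper returns the virtual-scene image for the camera's current pose and intrinsic parameters; it is a sketch under these assumptions, not the paper's implementation.

```python
import cv2
import numpy as np

def augment_frame(video_frame: np.ndarray,
                  virtual_render: np.ndarray,
                  alpha: float = 0.6) -> np.ndarray:
    """Overlay the virtual-scene render (DEM + vector layers) on a video frame."""
    # Resize the render to match the video frame in case resolutions differ.
    render = cv2.resize(virtual_render,
                        (video_frame.shape[1], video_frame.shape[0]))
    # Weighted blend: alpha controls how strongly the real video shows through.
    return cv2.addWeighted(video_frame, alpha, render, 1.0 - alpha, 0.0)

def process_stream(capture: cv2.VideoCapture, render_viewport) -> None:
    """Fuse each frame of a monitoring video stream with the virtual-scene image.

    render_viewport is an assumed callback that renders the virtual scene from
    the same viewpoint as the physical camera.
    """
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        fused = augment_frame(frame, render_viewport())
        cv2.imshow("fused", fused)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
```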
