Abstract

GPU-assisted multi-field rendering provides a means of generating effective video volume visualizations that convey both the objects in a spatiotemporal domain and the motion status of these objects. In this paper, we present a technical framework that enables combined volume and flow visualization of a video to be synthesized using GPU-based techniques. A bricking-based volume rendering method is deployed for handling large video datasets in a scalable manner, which is particularly useful for synthesizing a dynamic visualization of a video stream. We have implemented a number of image processing filters, and in particular, we employ an optical flow filter for estimating motion flows in a video. We have devised mechanisms for combining volume objects in a scalar field with glyph and streamline geometry from an optical flow. We demonstrate the effectiveness of our approach with example visualizations constructed from two benchmarking problems in computer vision.
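The abstract does not specify which optical flow filter is used. As an illustrative sketch only, not the paper's implementation, a basic dense gradient-based (Lucas-Kanade style) flow estimate between two frames can be written as follows; the function name, window size, and synthetic test frames are all assumptions for demonstration:

```python
import numpy as np

def lucas_kanade_flow(f0, f1, win=7):
    """Illustrative dense optical flow between two grayscale frames f0 and f1,
    solved per pixel as a windowed least-squares problem (Lucas-Kanade style).
    Returns an (H, W, 2) array of (u, v) displacement estimates."""
    # Spatial gradients of the first frame and the temporal difference.
    Ix = np.gradient(f0, axis=1)
    Iy = np.gradient(f0, axis=0)
    It = f1 - f0
    half = win // 2
    h, w = f0.shape
    flow = np.zeros((h, w, 2))
    for y in range(half, h - half):
        for x in range(half, w - half):
            # Gather gradient samples over the local window.
            ix = Ix[y - half:y + half + 1, x - half:x + half + 1].ravel()
            iy = Iy[y - half:y + half + 1, x - half:x + half + 1].ravel()
            it = It[y - half:y + half + 1, x - half:x + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)
            ATA = A.T @ A
            # Skip flat, ill-conditioned windows (aperture problem).
            if np.linalg.cond(ATA) < 1e6:
                u, v = np.linalg.solve(ATA, -A.T @ it)
                flow[y, x] = (u, v)
    return flow

# Synthetic frame pair: a smooth pattern shifted one pixel to the right,
# so the recovered flow should be roughly (u, v) = (1, 0) in the interior.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
f0 = np.sin(0.2 * xx) + np.cos(0.2 * yy)
f1 = np.roll(f0, 1, axis=1)
flow = lucas_kanade_flow(f0, f1)
```

In a pipeline like the one the abstract describes, a flow field of this kind would supply the vector data from which glyphs are placed and streamlines are integrated alongside the scalar volume.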
