Abstract

Virtual reality (VR) surgical training and presurgical planning require the creation of 3D virtual models of patient anatomy from medical scan data. Real-time head tracking in VR applications allows users to navigate the virtual anatomy from any 3D position and orientation. Interactively rendering highly detailed 3D volumetric anatomical models from a dynamically changing observer's perspective is extremely demanding on computational resources. Parallel computing offers a solution to this problem: a distributed volume graphics rendering system composed of multiple nodes concurrently working on different portions of the output image, which are later composited to form the final view. This paper presents a distributed graphics rendering system consisting of multiple GPU-based heterogeneous nodes running a best-effort rendering scheme. Experiments show promising results in terms of efficiency and performance for rendering medical volumes in real time.
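The parallel scheme described above, in which each node renders a portion of the output and the portions are then integrated, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the strip decomposition, the toy `render_strip` function, and all names are assumptions introduced here, with threads standing in for the rendering nodes.

```python
# Sketch of distributed rendering by image-space decomposition: the output
# frame is split into horizontal strips, each "node" (here a worker thread)
# renders its strip independently, and the strips are reassembled into the
# final view. render_strip is a stand-in for per-node volume rendering.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, NODES = 8, 8, 4

def render_strip(node_id, y0, y1):
    # Placeholder for volume ray casting of rows y0..y1-1 on one node;
    # each "pixel" records which node produced it, for demonstration.
    return [[(node_id, x, y) for x in range(WIDTH)] for y in range(y0, y1)]

def render_frame():
    strip_h = HEIGHT // NODES
    jobs = [(i, i * strip_h, (i + 1) * strip_h) for i in range(NODES)]
    with ThreadPoolExecutor(max_workers=NODES) as pool:
        strips = pool.map(lambda job: render_strip(*job), jobs)
    # Integration step: concatenate the rendered strips into the full frame.
    return [row for strip in strips for row in strip]

frame = render_frame()
```

A real system would distribute strips over the network to GPU nodes and, under a best-effort scheme, could composite whichever strips arrive within the frame deadline, reusing stale strips for the rest.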
