Abstract

This paper addresses automatic medical image segmentation and the visualization of the resulting models in virtual reality, presenting a complete pipeline that learns to extract anatomical models from medical images and prepares them for accurate display on a stereoscopic head-mounted display. First, we analyze methods of medical image segmentation and develop a model based on convolutional neural networks. Using an annotated dataset of 800 cardiac magnetic resonance image slices, we train and test the segmentation network to extract the left-ventricular anatomy. We further develop a post-processing pipeline that allows the extracted models to be displayed in virtual reality even on mobile devices, achieving low computational complexity while preserving high anatomical fidelity. Finally, we describe how we built both the front end and the back end of a virtual reality web application using A-Frame and the Entity-Component-System architectural pattern.
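The Entity-Component-System pattern mentioned above decouples data (components) from behavior (systems) attached to scene objects (entities). A minimal, framework-independent sketch of the pattern is below; all names are hypothetical and this is not A-Frame's actual API, which is declarative and HTML-attribute based.

```python
# Minimal sketch of the Entity-Component-System (ECS) pattern.
# Hypothetical names for illustration only; not the paper's code.
from dataclasses import dataclass, field

@dataclass
class Entity:
    # An entity is just a bag of named components (pure data).
    components: dict = field(default_factory=dict)

@dataclass
class Position:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

class MoveSystem:
    """A system applies one behavior to every entity carrying a component."""
    def update(self, entities, dz):
        for e in entities:
            pos = e.components.get("position")
            if pos is not None:
                pos.z += dz

# Usage: one entity standing in for a segmented heart model in the scene.
heart = Entity(components={"position": Position(0.0, 1.6, -2.0)})
MoveSystem().update([heart], dz=0.5)
print(heart.components["position"].z)
```

In A-Frame itself, the same idea appears as HTML elements (`<a-entity>`) carrying component attributes, with behavior registered in JavaScript; this sketch only conveys the architectural split.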
