Abstract

This study aims to demonstrate the efficient generation of three-dimensional (3D) models from medical scans and the ability of physicians to use these models via augmented reality (AR) head-mounted displays (HMDs). The ability to view and interact with 3D models of patients’ medical scans on HMDs such as the Microsoft HoloLens 2 opens a wide range of new possibilities for more accurate and intuitive preoperative and intraoperative planning. Traditionally, the manual workflow for generating AR models of medical scans relies on several separate software packages to carry out steps such as image segmentation, mesh refinement, and file conversion [1]. Our web-based application automates these steps, providing end-to-end integration from image upload to collaborative viewing and annotation of the 3D model on multiple AR headsets simultaneously. In addition to the core functions of automated segmentation, interpolation, resizing, and cropping of uploaded DICOM images, users can automatically convert the files into a point cloud (PLY), which can be viewed and interacted with through a preview screen built into the web application. Furthermore, these 3D models can be uploaded directly to an AR headset and viewed and annotated on multiple headsets simultaneously using the AR app developed for this workflow.
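
To make the DICOM-to-PLY conversion step concrete, the sketch below shows one minimal way such a conversion could be implemented. It is an illustrative assumption, not the authors' implementation: the function name dicom_series_to_ply is hypothetical, pydicom is assumed for reading slices, and a fixed Hounsfield-unit threshold stands in for the automated segmentation described in the abstract.

"""Minimal sketch of a DICOM-to-PLY conversion step (hypothetical names;
not the authors' pipeline). Assumes a directory of axial CT slices and a
fixed Hounsfield-unit threshold in place of automated segmentation."""
from pathlib import Path

import numpy as np
import pydicom


def dicom_series_to_ply(dicom_dir: str, ply_path: str, hu_threshold: float = 300.0) -> None:
    # Read every slice and sort along the patient axis (z position).
    slices = [pydicom.dcmread(p) for p in Path(dicom_dir).glob("*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

    # Stack into a 3D volume and rescale raw values to Hounsfield units.
    volume = np.stack([s.pixel_array.astype(np.float32) for s in slices])
    slope = float(getattr(slices[0], "RescaleSlope", 1.0))
    intercept = float(getattr(slices[0], "RescaleIntercept", 0.0))
    volume = volume * slope + intercept

    # Placeholder segmentation: keep voxels above the HU cutoff (e.g. bone).
    zs, ys, xs = np.nonzero(volume > hu_threshold)

    # Convert voxel indices to millimetres using the DICOM spacing tags.
    row_mm, col_mm = (float(v) for v in slices[0].PixelSpacing)
    slice_mm = float(getattr(slices[0], "SliceThickness", 1.0))
    points = np.column_stack([xs * col_mm, ys * row_mm, zs * slice_mm])

    # Write an ASCII PLY point cloud readable by a viewer or AR headset app.
    with open(ply_path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        np.savetxt(f, points, fmt="%.3f")


# Example usage (paths are placeholders):
# dicom_series_to_ply("scans/patient01", "patient01.ply")

In a production workflow such as the one described above, the threshold step would be replaced by the automated segmentation, interpolation, resizing, and cropping stages, with the resulting PLY served to the web preview and the AR headsets.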
