Abstract

<h3>Purpose/Objective(s)</h3>

This study investigates the feasibility of using a commercially available handheld device with augmented reality (AR) cameras, enhanced by light detection and ranging (LiDAR), for pre-treatment collision detection and patient positioning. The on-device LiDAR scanner measures object depths with direct time-of-flight (ToF) measurements, improving on the accuracy of contrast-based visual depth estimation. Our proposed approach leverages AR for near-real-time reconstruction of the patient's external contour during simulation and setup. This offers an advantage over conventional laser-tattoo alignment and photo-based patient positioning by providing an intuitive 3D rendering of the patient's body and immobilization devices at arbitrary angles.

<h3>Materials/Methods</h3>

Two modes for AR-assisted positioning were investigated: mesh mode and point-cloud mode. In mesh mode, a surface mesh of a mock pediatric phantom (50 × 40 × 15 cm<sup>3</sup>) was rapidly reconstructed via the default meshing algorithm of the software development kit (SDK). The resulting 3D contours were manually registered to a computed tomography (CT) scan of the phantom in a third-party open-source image processing suite. In point-cloud mode, surface anchor points of the phantom and its supporting devices were detected and placed in the real-world frame of reference, then overlaid on the real-time color image. Each handheld scanning session was limited to 30 seconds. The native pulse intervals of the LiDAR scanner (0.2–0.5 ns) were used, and the depth map was updated at 60 frames per second (fps). The scanning method was kept consistent and mimicked CT acquisition, with the device camera rotated about the superior-inferior axis while pointing toward the phantom. The resolution of the resulting surface mesh and point cloud was quantified, and the extent of the reconstructed volume was estimated.

<h3>Results</h3>

The scan and on-device real-time reconstruction were completed within 30 seconds. We successfully extracted the reconstructed mesh and registered it to the CT scan of the phantom. A mean distance of 1.5 ± 1.0 cm between adjacent vertices was achieved. The point cloud density averaged 20.9 ± 0.9 points/cm<sup>2</sup>, with decreased point density 35 cm superior to the phantom midpoint (10.0 points/cm<sup>2</sup>). Both modes captured a volume of approximately 4 × 4 × 4 m<sup>3</sup> containing the phantom and its supporting structures.

<h3>Conclusion</h3>

AR has emerged as a promising modality for surface-guided patient positioning, yet several previous studies have found insufficient mesh resolution and/or accuracy. LiDAR improves both aspects with directly measured depth information. We demonstrated that full 3D rendering of the patient body contour and supporting devices, with sub-cm point spacing, is achievable with a handheld device in under 30 seconds, which could provide a viable option for improved guidance on collision checks and patient positioning.
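For illustration, the sketch below shows one way the resolution metrics reported in the Results could be computed once the mesh and point cloud are exported from the device. The function names, array layouts, and the flat-patch approximation for point density are our assumptions for this example, not the study's actual analysis code.

```python
import numpy as np

def mean_adjacent_vertex_distance(vertices, faces):
    """Mean Euclidean distance between vertices sharing a mesh edge.

    vertices: (N, 3) float array of vertex positions (cm)
    faces:    (M, 3) int array of triangle vertex indices
    """
    # Collect the three edges of every triangle, de-duplicated so each
    # edge contributes once regardless of how many faces share it.
    edges = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    edges = np.unique(np.sort(edges, axis=1), axis=0)
    lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]],
                             axis=1)
    return lengths.mean(), lengths.std()

def point_density(points, patch_min, patch_max):
    """Approximate surface point density (points/cm^2) over a nearly flat
    rectangular patch, assumed here to lie in the x-y plane (e.g. the
    phantom's top surface).

    points:    (N, 3) float array of point-cloud positions (cm)
    patch_min: (x, y) lower corner of the patch
    patch_max: (x, y) upper corner of the patch
    """
    lo, hi = np.asarray(patch_min), np.asarray(patch_max)
    in_patch = np.all((points[:, :2] >= lo) & (points[:, :2] <= hi), axis=1)
    area = np.prod(hi - lo)  # patch area in cm^2
    return in_patch.sum() / area
```

With vertices and faces loaded from an exported mesh file, `mean_adjacent_vertex_distance` yields the mean ± SD vertex spacing quoted above, and `point_density` evaluated over patches at increasing superior offsets would reproduce the observed density falloff.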
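The mesh-to-CT registration in this study was performed manually in a third-party suite. As a hedged sketch of how such an alignment could instead be scripted, the following example refines an initial alignment with point-to-point ICP in the open-source Open3D library; the file names, point count, and correspondence distance are illustrative assumptions, and ICP is a substitute technique rather than the method used in the study.

```python
import numpy as np
import open3d as o3d

# Load the AR-reconstructed surface mesh and sample it to a point cloud;
# the CT body contour is assumed to be available as a point cloud too.
mesh = o3d.io.read_triangle_mesh("ar_surface.obj")       # hypothetical file
source = mesh.sample_points_uniformly(number_of_points=50000)
target = o3d.io.read_point_cloud("ct_body_contour.ply")  # hypothetical file

# Start from a rough initial alignment (identity here); ICP refines it.
init = np.eye(4)
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=2.0,  # cm; tune to expected misalignment
    init=init,
    estimation_method=(
        o3d.pipelines.registration.TransformationEstimationPointToPoint()),
)
print("fitness:", result.fitness)        # fraction of inlier correspondences
print("inlier RMSE:", result.inlier_rmse)
source.transform(result.transformation)  # apply the refined alignment
```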
