Abstract

Minimally invasive surgery (MIS) has changed not only how specific operations are performed but also the broader strategic approach to surgery. Expanding MIS to more complex procedures demands further development of new technologies, including robotic surgical systems, navigation, guidance, visualization, dexterity enhancement, and 3D printing. In the cardiovascular domain, 3D printed models can play a crucial role by improving visualization of anatomical details, guiding precision operations, and supporting functional evaluation of various congenital and congestive heart conditions. In this work, we propose a novel deep learning-driven method that provides quantitative 3D tracking of mock cardiac interventions on custom-designed 3D printed heart phantoms. The position of the catheter tip is tracked from bi-plane fluoroscopic images, and its continuous position relative to the 3D printed model is co-registered into a single coordinate system using external fiducial markers embedded in the model. Our proposed method has the potential to provide quantitative analysis for training exercises of percutaneous procedures guided by bi-plane fluoroscopy.
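The abstract describes two geometric steps: recovering the catheter tip in 3D from the two fluoroscopic views, and mapping that position into the printed model's coordinate system via the embedded fiducial markers. The sketch below illustrates one way such a pipeline could look, assuming calibrated 3x4 projection matrices for each imaging plane and known fiducial positions in both spaces; all function names, matrices, and variables are hypothetical and are not the paper's actual implementation.

```python
# Hypothetical sketch: bi-plane triangulation of the tip followed by
# fiducial-based rigid registration into the 3D printed model's frame.
import numpy as np

def triangulate_tip(P_frontal, P_lateral, uv_frontal, uv_lateral):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P_* : 3x4 projection matrices of the bi-plane fluoroscope (assumed known).
    uv_*: 2D pixel coordinates of the detected catheter tip in each view.
    """
    A = []
    for P, (u, v) in ((P_frontal, uv_frontal), (P_lateral, uv_lateral)):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    A = np.stack(A)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # homogeneous -> Euclidean coordinates

def register_to_model(fid_world, fid_model):
    """Rigid (Kabsch) transform mapping fluoroscope space onto the printed
    model, estimated from corresponding fiducial positions (N x 3 arrays)."""
    cw, cm = fid_world.mean(0), fid_model.mean(0)
    H = (fid_world - cw).T @ (fid_model - cm)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cm - R @ cw
    return R, t

# Usage (illustrative): tip_model = R @ triangulate_tip(...) + t
```

A rigid transform is sufficient here because the fiducials are fixed to the printed phantom, so only rotation and translation relate the two coordinate systems.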

Highlights

  • Since minimally invasive surgery (MIS) emerged in the 1980s, surgical skills and minimally invasive equipment have achieved significant advancements[1,2,3]

  • The minimally invasive approach holds a unique place for various surgical specialties, such as general surgery, urology[4], thoracic surgery[5], plastic surgery[6], and cardiac surgery[7]

  • We propose a novel deep learning-driven method for tracking a catheter in a 3D printed model from bi-plane fluoroscopic images acquired during the procedure


Summary

INTRODUCTION

Since minimally invasive surgery (MIS) emerged in the 1980s, surgical skills and minimally invasive equipment have achieved significant advancements[1,2,3]. Our group recently reported a novel training system that provides catheter navigation in mixed reality (MR), with real-time visual feedback of a physical catheter's position within a patient-specific 3D heart model[35]. That system used electromagnetic (EM) sensors to track the catheter position. While the EM approach is advantageous for its portability, it has low accuracy (errors of up to ~5 mm), requires manual integration of sensors into the catheter, and relies on hardware that is not readily available in catheterization labs. To address these limitations, we propose a novel deep learning-driven method for tracking a catheter in a 3D printed model from bi-plane fluoroscopic images acquired during the procedure. Our proposed method has the potential to provide quantitative analysis for training exercises of percutaneous procedures guided by bi-plane fluoroscopy.
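As a minimal illustration of the deep-learning tracking stage described above, the sketch below shows a tiny fully convolutional network that regresses a tip heatmap for each fluoroscopic frame, with the heatmap maximum taken as the 2D tip location that would feed the bi-plane reconstruction. This is a hypothetical PyTorch sketch; the architecture, layer sizes, and image dimensions are assumptions, not the network used in this work.

```python
# Hypothetical sketch: per-frame catheter-tip detection as heatmap regression.
import torch
import torch.nn as nn

class TipHeatmapNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),            # 1-channel heatmap logits
        )

    def forward(self, x):                   # x: (B, 1, H, W) fluoroscopic frame
        return self.features(x)

def tip_from_heatmap(heatmap):
    """Return (u, v) pixel coordinates of the heatmap maximum per image."""
    b, _, h, w = heatmap.shape
    idx = heatmap.view(b, -1).argmax(dim=1)
    return torch.stack((idx % w, idx // w), dim=1)   # (u, v) = (column, row)

# Example: detect the tip in one placeholder frontal-plane frame
net = TipHeatmapNet()
frame = torch.rand(1, 1, 256, 256)
uv = tip_from_heatmap(net(frame))
```

In practice such a detector would be trained on annotated fluoroscopic frames from each plane, and the two per-plane detections would then be triangulated and registered to the printed model as outlined earlier.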
