Abstract

Purpose: Guidance and quality control in orthopedic surgery increasingly rely on intra-operative fluoroscopy with a mobile C-arm. Accurate acquisition of standardized, anatomy-specific projections is essential in this process. The corresponding iterative positioning of the C-arm is error prone and involves repeated manual acquisitions or even continuous fluoroscopy. To reduce time and radiation exposure for patients and clinical staff and to avoid errors in fracture reduction or implant placement, we aim at guiding, and in the long run automating, this procedure.

Methods: In contrast to the state of the art, we tackle this inherently ill-posed problem without requiring patient-individual prior information such as preoperative computed tomography (CT) scans, without the need for registration, and without additional technical equipment besides the projection images themselves. We propose learning the necessary anatomical hints for efficient C-arm positioning from in silico simulations, leveraging large collections of 3D CT scans. Specifically, we propose a convolutional neural network regression model that predicts 5 degree-of-freedom (DoF) pose updates directly from a first X-ray image. The method generalizes to different anatomical regions and standard projections.

Results: Quantitative and qualitative validation was performed for two clinical applications involving two highly dissimilar anatomies, namely the lumbar spine and the proximal femur. Starting from one initial projection, the mean absolute pose error to the desired standard pose is iteratively reduced across different anatomy-specific standard projections. Acquisitions of both hip joints on 4 cadavers allowed for an evaluation on clinical data, demonstrating that the approach generalizes without retraining.

Conclusion: Overall, the results suggest the feasibility of an efficient deep learning-based automated positioning procedure trained entirely on simulations. Our proposed two-stage approach for C-arm positioning significantly improves accuracy on synthetic images. In addition, we demonstrated that learning based on simulations translates to acceptable performance on real X-rays.
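The paper itself does not include source code. As a purely illustrative sketch, the following PyTorch snippet shows one way a convolutional regression network could map a single projection image to a 5-DoF pose update; the architecture, layer sizes, and the name PoseUpdateNet are assumptions for illustration and do not correspond to the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): a CNN that regresses a
# 5-DoF C-arm pose update from a single grayscale projection image (DRR or X-ray).
import torch
import torch.nn as nn

class PoseUpdateNet(nn.Module):
    """Regresses a 5-DoF C-arm pose update from one projection image."""
    def __init__(self, n_dof: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_dof),  # pose update, e.g. rotations in degrees and translations in mm
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x))

# In the in-silico training setup, the regression target is the known offset
# between the simulated acquisition pose and the anatomy-specific standard pose.
model = PoseUpdateNet()
image = torch.randn(1, 1, 256, 256)   # one simulated projection
predicted_update = model(image)       # shape (1, 5)
```

In the iterative scheme described in the Results, such a predicted update would be applied to the C-arm pose, a new image acquired, and the network re-evaluated until the remaining update becomes negligible.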

Highlights

  • Mobile fluoroscopic imaging is used to guide interventions in orthopedic and trauma surgery and to evaluate the success of fracture reduction and implant placement

  • We report the angle between the principal ray of the ground truth standard beam direction and that of the estimated pose (see the sketch after this list)

  • We evaluate the precision of our underlying ground truth (“Precision of reference standard projections” section) and present experiments for C-arm positioning conducted on synthetic (“Pose estimation for standard projections on synthetic X-rays” section) and real X-rays (“Pose estimation for standard projections on real X-rays” section)
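To make the reported error metric concrete, the sketch below computes the angle between two principal-ray direction vectors, e.g. the ground truth standard beam direction and the one reached by the estimated pose. The vector-based formulation and the function name principal_ray_angle_deg are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: angle between two principal-ray directions,
# e.g. ground-truth standard beam direction vs. the estimated one.
import numpy as np

def principal_ray_angle_deg(ray_a: np.ndarray, ray_b: np.ndarray) -> float:
    """Return the angle (in degrees) between two 3D beam-direction vectors."""
    a = ray_a / np.linalg.norm(ray_a)
    b = ray_b / np.linalg.norm(ray_b)
    cos_angle = np.clip(np.dot(a, b), -1.0, 1.0)  # clip guards against rounding errors
    return float(np.degrees(np.arccos(cos_angle)))

# Example: a beam tilted by 5 degrees about one axis
gt_ray = np.array([0.0, 0.0, 1.0])
est_ray = np.array([0.0, np.sin(np.radians(5)), np.cos(np.radians(5))])
print(principal_ray_angle_deg(gt_ray, est_ray))  # ~5.0
```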



Introduction

Mobile fluoroscopic imaging is used to guide interventions in orthopedic and trauma surgery and to evaluate the success of fracture reduction and implant placement. An essential task in image-guided surgery is the generation of a correct standard projection of the anatomy for medical verification [1]. The correct projection corresponds to a specific pose of the C-arm relative to the patient’s positioning. It is challenging to obtain the desired view due to variabilities in patient placement and because the internal anatomy is not visible from outside. Interpreting the resulting 2D projections is further complicated by effects of projective simplification, increasing the risk of overlooked errors. Examples of critical errors include the malunion of fractures, leading to functional impairment and, in the worst case, requiring a subsequent intervention at increased rates of complication.
