In magnetic resonance imaging (MRI)-guided radiotherapy (MRgRT), rapid 2D imaging is commonly used to track moving targets with high temporal frequency to minimize gating latency. However, anatomical motion is not constrained to 2D, and a portion of the target may be missed during treatment if 3D motion is not evaluated. While some MRgRT systems attempt to capture 3D motion by sequentially tracking motion in 2D orthogonal imaging planes, this approach assesses 3D motion via independent 2D measurements at alternating instances and lacks a simultaneous 3D motion assessment in both imaging planes. We hypothesized that a motion model could be derived from prior 2D orthogonal imaging to estimate 3D motion in both planes simultaneously. We present a manifold learning technique to estimate 3D motion from 2D orthogonal imaging. Five healthy volunteers were scanned under an IRB-approved protocol using a 3.0 T Siemens Skyra simulator. Images of the liver dome were acquired during free breathing (FB) with a 2.6 mm × 2.6 mm in-plane resolution for approximately 10 min in alternating sagittal and coronal planes at ∼5 frames per second. The motion model was derived using a combined manifold learning and alignment approach based on locally linear embedding (LLE). The model utilized the spatially overlapping MRI signal shared by both imaging planes to group together images with similar signals, enabling motion estimation in both planes simultaneously. The model's motion estimates were compared to the ground truth motion derived in each newly acquired image using deformable registration. A simulated target was defined on the dome of the liver and used to evaluate model performance. The Dice similarity coefficient and the distance between the model-tracked and image-tracked contour centroids were evaluated. Motion modeling error was estimated in the orthogonal plane by back-propagating the motion to the currently imaged plane and by interpolating the motion between image acquisitions where ground truth motion was available. The motion observed in the healthy volunteer studies ranged from 12.6 mm to 38.7 mm. On average, the model demonstrated sub-millimeter precision and a Dice coefficient > 0.95 compared to the ground truth motion observed in the currently imaged plane. The average Dice coefficient and centroid distance between the model-tracked and ground truth target contours were 0.96 ± 0.03 and 0.26 mm ± 0.27 mm, respectively, across all volunteer studies. The out-of-plane centroid motion error was estimated to be 0.85 mm ± 1.07 mm and 1.26 mm ± 1.38 mm using the back-propagation (BP) and interpolation error estimation methods, respectively. The healthy volunteer studies indicate promising results using the proposed motion modeling technique. Out-of-plane modeling error was estimated to be higher but still demonstrated sub-voxel motion accuracy.
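As a rough illustration of the manifold learning step, the Python sketch below embeds a toy sequence of 2D frames with scikit-learn's LocallyLinearEmbedding so that frames acquired at similar respiratory phases cluster together. The synthetic frames, neighbor count, and embedding dimension are all assumptions for demonstration only; this is not the authors' combined manifold learning and alignment method and does not perform the cross-plane alignment or deformable registration described above.

```python
# Illustrative sketch only: group image frames by respiratory state using
# locally linear embedding (LLE). The synthetic "frames" and every parameter
# below are assumptions, not the authors' pipeline.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)

# Simulate a breathing-like surrogate signal and toy 64x64 frames whose
# bright band shifts with respiratory phase (a crude stand-in for the
# moving liver dome).
n_frames = 300
phase = np.sin(np.linspace(0, 20 * np.pi, n_frames))
rows = np.arange(64)
frames = np.stack([
    np.tile(np.exp(-0.5 * ((rows - (32 + 10 * p)) / 5.0) ** 2), (64, 1)).T
    for p in phase
])  # shape: (n_frames, 64, 64)

# Flatten each frame to a feature vector, add mild noise, and learn a
# low-dimensional manifold of the frame collection.
X = frames.reshape(n_frames, -1) + 0.01 * rng.standard_normal((n_frames, 64 * 64))
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
embedding = lle.fit_transform(X)  # shape: (n_frames, 2)

# Frames with similar respiratory states land near each other in the
# embedding; here we check how strongly the leading embedding coordinate
# relates to the simulated phase.
print(abs(np.corrcoef(embedding[:, 0], phase)[0, 1]))
```

In the paper's setting, an analogous embedding learned from prior alternating sagittal and coronal acquisitions is what allows newly acquired frames from one plane to be associated with anatomically similar frames from the other plane.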