Abstract

In cancer radiation therapy, large tumor motion due to respiration can lead to uncertainties in tumor target delineation and treatment delivery, making active motion management an essential step in thoracic and abdominal tumor treatment. In current practice, patients with tumor motion may be required to receive two sets of CT scans – an initial free-breathing 4-dimensional CT (4DCT) scan for tumor motion estimation and a second CT scan under appropriate motion management such as breath-hold or abdominal compression. The aim of this study is to assess the feasibility of a predictive model for tumor motion estimation in three-dimensional space based on machine learning algorithms. The model was developed using sixteen imaging features extracted from non-4D diagnostic CT images and eleven clinical features extracted from the Electronic Health Record (EHR) database of 150 patients to characterize lung tumor motion. A super-learner model was trained to combine four base machine learning models, namely Random Forest, Multi-Layer Perceptron, LightGBM and XGBoost, whose hyper-parameters were also optimized to obtain the best performance. The super-learner model outputs tumor motion predictions in the Superior-Inferior (SI), Anterior-Posterior (AP) and Left-Right (LR) directions, which were compared against tumor motions measured in the free-breathing 4DCT scans. Prediction accuracy was evaluated using Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) over ten rounds of independent tests. The MAE and RMSE of predictions were 1.23 mm and 1.70 mm in the SI direction, 0.81 mm and 1.19 mm in the AP direction, and 0.70 mm and 0.95 mm in the LR direction. In addition, the relative feature importance analysis demonstrated that the imaging features contributed substantially more to the tumor motion prediction than the clinical features. Our findings indicate that a super-learner model can accurately predict tumor motion ranges as measured in 4DCT, and could provide a machine learning framework to assist radiation oncologists in determining the active motion management strategy for patients with large tumor motion.
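
For readers interested in how such a super-learner can be assembled, the sketch below stacks the four base regressors named in the abstract using scikit-learn, LightGBM and XGBoost, and evaluates one motion axis with MAE and RMSE over repeated train/test splits. It is a minimal illustration under stated assumptions, not the authors' implementation: the feature matrix X (16 imaging plus 11 clinical features), the per-axis target y, the hyper-parameter values and the ridge meta-learner are placeholders chosen for the example.

    # Minimal sketch of a stacking ("super-learner") regressor for one motion axis.
    # X and y, the hyper-parameters and the meta-learner are illustrative assumptions,
    # not the configuration reported in the paper.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor, StackingRegressor
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error
    from lightgbm import LGBMRegressor
    from xgboost import XGBRegressor

    def build_super_learner():
        """Combine the four base models with a simple ridge meta-learner."""
        base_models = [
            ("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
            ("mlp", MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)),
            ("lgbm", LGBMRegressor(n_estimators=300, random_state=0)),
            ("xgb", XGBRegressor(n_estimators=300, random_state=0)),
        ]
        # The meta-learner is fit on cross-validated predictions of the base models.
        return StackingRegressor(estimators=base_models, final_estimator=RidgeCV(), cv=5)

    def evaluate(X, y, n_rounds=10):
        """Report mean MAE/RMSE (mm) over repeated independent test splits for one direction."""
        maes, rmses = [], []
        for seed in range(n_rounds):
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
            model = build_super_learner().fit(X_tr, y_tr)
            pred = model.predict(X_te)
            maes.append(mean_absolute_error(y_te, pred))
            rmses.append(np.sqrt(mean_squared_error(y_te, pred)))
        return np.mean(maes), np.mean(rmses)

In this sketch a separate super-learner would be trained for each of the SI, AP and LR directions; the ten-round evaluation loop mirrors the repeated independent tests described in the abstract.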

Highlights

  • Respiratory motion poses a great challenge in the treatment of lung cancer with radiation therapy[1,2,3]

  • The imaging features demonstrated a higher degree of importance in motion prediction than the clinical features

  • In this work we proposed and implemented a machine learning pipeline to investigate the relationship between an extensive set of input features and lung tumor motion ranges, and developed a super-learner model to predict the tumor motion ranges in three dimensions based on the patient's diagnostic CT images and Electronic Health Record (EHR) data


Introduction

Respiratory motion poses a great challenge in the treatment of lung cancer with radiation therapy[1,2,3]. Target and normal tissue motion can be quite complex and patient-dependent[4]. To address this issue in modern radiation therapy treatment planning, an internal margin is assigned based on the patient’s 4-dimensional Computed Tomography (4DCT) to form an Internal Target Volume (ITV)[5,6] that encompasses the extent of the tumor motion. Patients with large tumor motion are better suited to an active motion management strategy, such as a breath-hold technique[7] or the use of a compression belt[8], to reduce the magnitude of tumor motion caused by diaphragmatic breathing[9]. These active motion management procedures require extra steps in the simulation workflow because an additional simulation is needed.
