Abstract

Intraoperative identification of head and neck cancer tissue is essential to achieve complete tumor resection and mitigate tumor recurrence. Mesoscopic fluorescence lifetime imaging (FLIm) of intrinsic tissue fluorophore emission has demonstrated the potential to demarcate the extent of the tumor in patients undergoing surgical procedures of the oral cavity and the oropharynx. Here, we report FLIm-based classification methods using standard machine learning models that account for the diverse anatomical and biochemical composition of the head and neck to improve tumor region identification. Three anatomy-specific binary classification models were developed (i.e., "base of tongue," "palatine tonsil," and "oral tongue"). FLIm data from patients (N = 85) undergoing upper aerodigestive oncologic surgery were used to train and validate the classification models using leave-one-patient-out cross-validation. These models were evaluated on two classification tasks: (1) discriminating between healthy and cancer tissue, and (2) applying the binary classification model trained on healthy and cancer tissue to identify dysplasia via transfer learning. This approach achieved superior classification performance compared with anatomy-agnostic models, specifically ROC-AUC values of 0.94 for the first task and 0.92 for the second. Furthermore, the model detected dysplasia, highlighting the generalizability of the FLIm-based classifier. These findings demonstrate that a classifier accounting for tumor location can improve the accurate identification of surgical margins and underscore FLIm's potential as a tool for surgical guidance in head and neck cancer patients, including those undergoing robotic surgery.
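
To make the evaluation scheme concrete, the sketch below trains one binary (healthy vs. cancer) classifier per anatomical site and scores it with leave-one-patient-out cross-validation. The feature representation, the choice of a random forest, and all variable and function names are illustrative assumptions; the abstract does not specify which standard machine learning models or features were used.

```python
# Minimal sketch (assumed, not the authors' implementation): anatomy-specific
# binary classifiers evaluated with leave-one-patient-out cross-validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in model choice
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

def evaluate_anatomy_specific(X, y, patient_ids, anatomy_labels):
    """X: (n_points, n_features) FLIm-derived features (hypothetical layout);
    y: 0 = healthy, 1 = cancer; patient_ids group points by patient;
    anatomy_labels mark the anatomical site of each point."""
    results = {}
    for site in ("base of tongue", "palatine tonsil", "oral tongue"):
        mask = anatomy_labels == site
        X_site, y_site, groups = X[mask], y[mask], patient_ids[mask]
        scores, truths = [], []
        # Each fold holds out every measurement from one patient.
        for train_idx, test_idx in LeaveOneGroupOut().split(X_site, y_site, groups):
            clf = RandomForestClassifier(n_estimators=200, random_state=0)
            clf.fit(X_site[train_idx], y_site[train_idx])
            scores.append(clf.predict_proba(X_site[test_idx])[:, 1])
            truths.append(y_site[test_idx])
        # Pool held-out predictions across patients before computing ROC-AUC.
        results[site] = roc_auc_score(np.concatenate(truths),
                                      np.concatenate(scores))
    return results
```

Grouping the cross-validation folds by patient rather than by individual measurement prevents data from the same patient appearing in both training and test sets, which would otherwise inflate the reported ROC-AUC.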
