Abstract

Background: Mucinous colorectal carcinoma (CRC) is found in 10-20% of patients and is associated with worse prognosis and treatment resistance. Early identification of a mucinous tumor component at baseline and monitoring of resistant clones at follow-up are challenging in clinical practice, which hinders appropriate and timely treatment selection. On CT, which is routinely acquired in clinical practice, mucinous tumors can be characterized by semantic features such as hypoattenuation and more heterogeneous enhancement than non-mucinous tumors (Wnorowski et al 2019). However, the diagnostic accuracy of such CT findings reaches at most 62% (Young et al 2007). This can be substantially improved by robust feature quantification using state-of-the-art machine learning and neural network techniques.

Materials and Methods: CT scans from 7 mucinous and 7 non-mucinous CRC patients were included in model development (80% training, 20% validation), and 2 mucinous and 2 non-mucinous independent patients were used to test model performance. Multiple lesions (primary and metastatic) were semi-automatically segmented in 3D Slicer (N=32 development and N=12 test). Three classification models were generated from the CT images: (1) a logistic regression model based on a newly developed hypodense tissue connectivity (HTC) metric; (2) a logistic regression model using a set of automatically selected radiomics (RAD) features (shape, first-order and second-order); and (3) a convolutional neural network (CNN) model based on the ResNet architecture with automatically selected features. HTC was computed as the ratio between the volume of connected hypodense tissue (0 < HU < 30) and the total tumor volume. CT volumes were converted to 2D axial sections to increase the input size for the CNN (resulting in 952 images), and class weights were used to mitigate the imbalance between the numbers of mucinous and non-mucinous images.
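The HTC metric described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code: the function name is invented, and the abstract does not specify whether "connected hypodense tissue" means the largest connected component or all hypodense voxels, so the largest-component reading is an assumption here.

```python
import numpy as np
from scipy import ndimage

def hypodense_tissue_connectivity(ct_hu, tumor_mask):
    """Sketch of the HTC metric: volume of connected hypodense tissue
    (0 < HU < 30) inside the tumor, divided by total tumor volume.

    Assumption: "connected hypodense tissue" is taken as the largest
    3D-connected hypodense component (names/choices are illustrative).
    """
    # Hypodense voxels restricted to the segmented tumor
    hypodense = (ct_hu > 0) & (ct_hu < 30) & tumor_mask
    # Label 3D-connected components of hypodense tissue
    labeled, n_components = ndimage.label(hypodense)
    if n_components == 0:
        return 0.0
    # Voxel count of each component; keep the largest
    sizes = ndimage.sum(hypodense, labeled, index=range(1, n_components + 1))
    return float(np.max(sizes)) / float(tumor_mask.sum())
```

With isotropic voxels the voxel-count ratio equals the volume ratio; for anisotropic spacing both terms would be scaled by the same voxel volume, so the ratio is unchanged.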
Model performance was quantified with accuracy (Acc), sensitivity (Sen), specificity (Spec), and AUC.

Results: The classification performance of all three models was first assessed with 10-fold cross-validation, where the RAD and CNN models outperformed the HTC model (mean±std). RAD: Acc 0.9±0.15, Sen 0.92±0.15, Spec 0.9±0.2, AUC 0.95±0.13; CNN: Acc 0.93±0.05, Sen 0.93±0.03, Spec 0.93±0.19, AUC 0.95±0.01; HTC: Acc 0.72±0.23, Sen 0.7±0.4, Spec 0.75±0.33, AUC 0.84±0.32. On the independent test set, however, the HTC and CNN models provided the best results. RAD: Acc 0.67, Sen 1.0, Spec 0, AUC 0.5; CNN: Acc 0.97, Sen 0.98, Spec 0.96, AUC 0.97; HTC: Acc 0.92, Sen 1.0, Spec 0.75, AUC 1.0.

Conclusions: All three models improved the accuracy of mucinous lesion identification over the literature-reported value. Further method development and validation on larger multi-center cohorts are needed to gain confidence in the models' applicability in the clinical setting.

Citation Format: Kinga Bernatowicz, Raquel Perez Lopez, Hector Garcia Palmer, Elena Elez Fernandez, Jose Fernandez Navarro, Marta Ligero Hernandez, Alonso Garcia Ruiz, Manuel Escobar Amores. Humans cannot accurately detect mucinous colorectal carcinoma from CT images, can AI help? [abstract]. In: Proceedings of the AACR Virtual Special Conference on Artificial Intelligence, Diagnosis, and Imaging; 2021 Jan 13-14. Philadelphia (PA): AACR; Clin Cancer Res 2021;27(5_Suppl):Abstract nr PO-021.
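The four reported metrics (Acc, Sen, Spec, AUC) can all be derived from per-lesion predicted scores and binary labels. The sketch below is a generic, hedged illustration of those standard definitions (mucinous = positive class); the function name and 0.5 threshold are assumptions, not details from the abstract.

```python
import numpy as np

def classification_summary(y_true, y_score, threshold=0.5):
    """Illustrative computation of accuracy, sensitivity, specificity
    and AUC from predicted probabilities (positive class = mucinous).
    """
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score, dtype=float)
    y_pred = (y_score >= threshold).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    # AUC via the rank (Mann-Whitney) formulation: fraction of
    # positive/negative pairs ranked correctly (ties count 0.5)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    auc = float(np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg]))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "auc": auc,
    }
```

This threshold-free AUC formulation explains how the HTC model in the abstract can show AUC 1.0 alongside Spec 0.75: the scores rank the classes perfectly, but the chosen operating threshold misclassifies one non-mucinous case.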
