Abstract

Fat-poor angiomyolipoma (fp-AML) contains too little fat to be detected on imaging and is therefore often misdiagnosed as renal cell carcinoma (RCC). We aimed to develop and evaluate a multichannel deep learning model for differentiating fp-AML from RCC. This two-center retrospective study included 320 patients from the First Affiliated Hospital of Sun Yat-Sen University (FAHSYSU) and 132 patients from the Sun Yat-Sen University Cancer Center (SYSUCC). Data from FAHSYSU patients were divided into a development dataset (n = 267) and a hold-out dataset (n = 53). The development dataset was used to determine the optimal combination of CT modality and input channel; the hold-out dataset and the SYSUCC dataset were used for independent internal and external validation, respectively. In the development phase, models trained on unenhanced CT images significantly outperformed those trained on enhanced CT images under fivefold cross-validation. The best patient-level performance, an average area under the receiver operating characteristic curve (AUC) of 0.951 ± 0.026 (mean ± SD), was achieved by the "unenhanced CT and 7-channel" model, which was selected as the optimal model. In independent internal and external validation, this model achieved AUCs of 0.966 (95% CI 0.919-1.000) and 0.898 (95% CI 0.824-0.972), respectively. Its performance was also better on large tumors (≥ 40 mm) in both internal and external validation. These promising results suggest that our multichannel deep learning classifier based on unenhanced whole-tumor CT images is a useful tool for differentiating fp-AML from RCC.
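
To make the evaluation protocol concrete, the sketch below shows one way a multichannel CT classifier could be assessed with fivefold cross-validation and patient-level AUC, as described in the abstract. The abstract does not specify how the 7 channels are composed or what architecture was used, so the channel stacking, the toy CNN, and all hyperparameters here are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch: fivefold cross-validation of a small CNN with a
# 7-channel input per patient, scored by patient-level AUC.
# The architecture, channel composition, and training setup are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

class SmallMultichannelCNN(nn.Module):
    """Toy classifier accepting a 7-channel 2-D stack (one stack per patient)."""
    def __init__(self, in_channels: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: fp-AML vs. RCC

    def forward(self, x):
        return self.head(self.features(x).flatten(1)).squeeze(1)

# Synthetic stand-in for a development dataset: 40 "patients",
# each a 7 x 64 x 64 stack, with a binary label (1 = fp-AML, 0 = RCC).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 7, 64, 64)).astype(np.float32)
y = np.repeat([0, 1], 20)

aucs = []
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, val_idx in skf.split(X, y):
    model = SmallMultichannelCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    xb = torch.from_numpy(X[train_idx])
    yb = torch.from_numpy(y[train_idx]).float()
    for _ in range(20):  # brief training, purely for illustration
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
    with torch.no_grad():  # patient-level AUC on the held-out fold
        probs = torch.sigmoid(model(torch.from_numpy(X[val_idx]))).numpy()
    aucs.append(roc_auc_score(y[val_idx], probs))

print(f"cross-validated AUC: {np.mean(aucs):.3f} ± {np.std(aucs):.3f}")
```

Reporting the fold-wise mean ± SD of the AUC, as in this sketch, matches the form of the development-phase result quoted above (0.951 ± 0.026); the final model would then be refit and applied once to the hold-out and external datasets.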
