Abstract

Objective: To develop a deep learning network that uses multiscale images to classify follicular thyroid carcinoma (FTC) and follicular thyroid adenoma (FTA) on preoperative ultrasound (US).

Methods: In this retrospective study, ultrasound images were collected from 279 patients at two tertiary-level hospitals. To address false positives caused by small nodules, we introduced a multi-rescale fusion network (MRF-Net). Four deep learning models (MobileNet V3, ResNet50, DenseNet121, and MRF-Net) were evaluated based on the feature information extracted from the ultrasound images. The performance of each model was assessed using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, F1 value, the receiver operating characteristic (ROC) curve and area under the curve (AUC), decision curve analysis (DCA), and the confusion matrix.

Results: Of the nodules examined, 193 were identified as FTA and 86 were confirmed as FTC. Among the deep learning models evaluated, MRF-Net achieved the highest accuracy (85.3%) and AUC (84.8%), along with superior sensitivity and specificity compared with the other models, and an F1 value of 83.08%. The DCA curves showed that MRF-Net consistently outperformed the other models, yielding higher net benefits across decision thresholds.

Conclusion: MRF-Net enables more precise discrimination between benign and malignant follicular thyroid tumors on preoperative US.
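
The abstract does not specify the MRF-Net architecture, but the "multi-rescale fusion" idea can be illustrated with a minimal sketch: the same ultrasound image is encoded at several resolutions by a shared CNN backbone, and the pooled features are fused before a binary FTA-vs-FTC classifier. The scales, the ResNet50 backbone, and all module names below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a multi-rescale fusion classifier (illustrative only;
# the actual MRF-Net architecture is not described in the abstract).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class MultiRescaleFusionNet(nn.Module):
    """Encodes an image at several scales with a shared backbone and fuses
    the pooled features for binary FTA-vs-FTC classification (assumed design)."""

    def __init__(self, scales=(1.0, 0.75, 0.5), num_classes=2):
        super().__init__()
        self.scales = scales
        backbone = models.resnet50(weights=None)        # shared feature extractor (assumption)
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # drop the FC head
        feat_dim = backbone.fc.in_features              # 2048 for ResNet50
        self.fusion = nn.Sequential(                    # simple concatenation-based fusion
            nn.Linear(feat_dim * len(scales), 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
        )
        self.classifier = nn.Linear(512, num_classes)

    def forward(self, x):
        feats = []
        for s in self.scales:
            xs = x if s == 1.0 else F.interpolate(
                x, scale_factor=s, mode="bilinear", align_corners=False)
            feats.append(self.encoder(xs).flatten(1))   # (B, feat_dim) per scale
        fused = self.fusion(torch.cat(feats, dim=1))    # fuse multi-scale features
        return self.classifier(fused)


if __name__ == "__main__":
    model = MultiRescaleFusionNet()
    logits = model(torch.randn(2, 3, 224, 224))         # dummy ultrasound batch
    print(logits.shape)                                 # torch.Size([2, 2])
```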
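
The metrics listed in the Methods can all be derived from the model's predictions. The snippet below is a generic sketch, assuming binary labels (1 = FTC, 0 = FTA), predicted probabilities, and a 0.5 decision threshold; it is not the study's evaluation code. The `net_benefit` helper follows the standard decision-curve-analysis definition.

```python
# Hedged example: computing the reported metrics from binary labels and
# predicted probabilities (illustrative, not the study's evaluation script).
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score


def evaluate(y_true, y_prob, threshold=0.5):
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),   # recall for the malignant (FTC) class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "f1": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_prob),
    }


def net_benefit(y_true, y_prob, pt):
    """Net benefit at threshold probability pt, as plotted in decision curve analysis."""
    y_pred = (np.asarray(y_prob) >= pt).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    n = len(y_true)
    return tp / n - fp / n * (pt / (1 - pt))


# Toy usage with made-up predictions:
y_true = [0, 0, 1, 1, 0, 1, 0, 1]
y_prob = [0.2, 0.4, 0.8, 0.6, 0.1, 0.3, 0.7, 0.9]
print(evaluate(y_true, y_prob))
print(net_benefit(y_true, y_prob, pt=0.3))
```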
