Abstract

Prompt and correct discrimination of benign and malignant thyroid nodules is a core issue in clinical practice. Ultrasound imaging is one of the most common tools radiologists use to identify the nature of thyroid nodules. However, visual assessment of nodules is difficult and often affected by inter- and intraobserver variability. This paper proposes a novel hybrid approach based on machine learning and information fusion to discriminate the nature of thyroid nodules. Statistical features are extracted from the B-mode ultrasound (B-US) image, while deep features are extracted from the shear-wave elastography (SWE-US) image. Classifiers including logistic regression, Naive Bayes, and support vector machines are trained on the statistical features and the deep features, respectively, for comparison. A voting system with several decision criteria then combines the two classification results to obtain better performance. Experimental and comparison results demonstrate that the proposed method classifies thyroid nodules correctly and efficiently.
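The two-branch training described above can be sketched with scikit-learn. The dataset size, feature dimensions, and feature values below are synthetic placeholders standing in for the paper's B-US statistical features and SWE-US deep features, not its actual data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 nodules, 12 statistical features (B-US branch)
# and 256 deep features (SWE-US branch); labels 0 = benign, 1 = malignant.
y = rng.integers(0, 2, size=200)
X_stat = rng.normal(size=(200, 12)) + y[:, None] * 0.8
X_deep = rng.normal(size=(200, 256)) + y[:, None] * 0.3

# Train one classifier per feature branch, as the abstract describes.
stat_clf = SVC(probability=True).fit(X_stat, y)
deep_clf = LogisticRegression(max_iter=1000).fit(X_deep, y)

# Each branch yields its own malignancy probability per nodule;
# these are the inputs the voting system later fuses.
p_stat = stat_clf.predict_proba(X_stat)[:, 1]
p_deep = deep_clf.predict_proba(X_deep)[:, 1]
print(p_stat.shape, p_deep.shape)
```

In the paper the deep branch uses activations from a pretrained CNN rather than random vectors; the sketch only shows how the two probability streams are produced for fusion.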

Highlights

  • Deep learning models, especially convolutional neural networks (CNNs), have received great attention in image classification and target recognition [11]

  • We propose a hybrid approach combining models trained with traditional features extracted from B-US images and deep features extracted from SWE-US images for the thyroid nodule classification task

  • When a patient undergoes multiple biopsies, the gold standard for final diagnosis will be determined according to the following priorities: excisional biopsy, core needle biopsy, and FNA biopsy. There are 490 images in total (B-US and SWE-US each account for half), consisting of 145 images of benign nodules and 100 images of malignant nodules. This retrospective study was approved by the institutional review board, and informed consent was obtained from all patients



Introduction

Deep learning models, especially convolutional neural networks (CNNs), have received great attention in image classification and target recognition [11]. We propose a hybrid approach combining models trained with traditional features extracted from B-US images and deep features extracted from SWE-US images for the thyroid nodule classification task. We employ a pretrained CNN model, transfer-learned from ImageNet, as a feature extractor to draw deep features from the SWE-US image dataset. We compare classifiers trained with features extracted from each layer of the CNN to find the most discriminative classifier for the classification task. A voting system including pessimistic, optimistic, and compromise criteria is designed and conducted to combine predictive results from different classifiers to obtain better classification performance. (2) The classifiers trained with features extracted from each layer of the CNN are compared to find the most discriminative classifier for the nodule classification task. (3) The performance of different decision-making strategies on the classification results is compared and analyzed, and reasonable suggestions are put forward.
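The exact voting rules are not spelled out in this excerpt; a minimal sketch under one plausible interpretation of the three criteria (pessimistic: flag malignancy if either branch does; optimistic: only if both do; compromise: average the two probabilities) could look like:

```python
def fuse(p_bus: float, p_swe: float, criterion: str = "compromise") -> int:
    """Fuse two malignancy probabilities (one per imaging branch) into a
    0/1 decision. The criterion names follow the paper, but the exact
    rules and the 0.5 threshold here are assumptions for illustration."""
    if criterion == "pessimistic":      # cautious: trust the higher estimate
        score = max(p_bus, p_swe)
    elif criterion == "optimistic":     # require both branches to agree
        score = min(p_bus, p_swe)
    elif criterion == "compromise":     # average the two branches
        score = (p_bus + p_swe) / 2
    else:
        raise ValueError(f"unknown criterion: {criterion}")
    return int(score >= 0.5)

# Disagreeing branches: only the pessimistic criterion flags malignancy.
print(fuse(0.7, 0.2, "pessimistic"),   # -> 1
      fuse(0.7, 0.2, "optimistic"),    # -> 0
      fuse(0.7, 0.2, "compromise"))    # -> 0
```

Under this reading, the pessimistic criterion minimizes missed malignancies at the cost of more false positives, while the optimistic criterion does the reverse; the compromise criterion sits between them.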


