Abstract
Ultrasonography is widely used in the clinical diagnosis of thyroid nodules. Ultrasound images of thyroid nodules vary in appearance and internal features and have blurred borders, which makes it difficult for a physician to classify them as malignant or benign through visual inspection alone. The development of artificial intelligence, especially deep learning, has led to great advances in the field of medical image diagnosis. However, achieving both precision and efficiency in the recognition of thyroid nodules remains challenging. In this work, we propose a deep learning architecture, the you only look once v3 dense multireceptive fields convolutional neural network (YOLOv3-DMRF), based on YOLOv3. It comprises a DMRF-CNN and multiscale detection layers. In the DMRF-CNN, we integrate dilated convolutions with different dilation rates so that edge and texture features continue to be passed to deeper layers. Detection layers at two different scales are deployed to recognize thyroid nodules of different sizes. We used two datasets to train and evaluate YOLOv3-DMRF. One dataset comprises 699 original ultrasound images of thyroid nodules collected from a local physical health center; data augmentation yielded 10,485 images. The other is an open-access dataset that includes ultrasound images of 111 malignant and 41 benign thyroid nodules. Average precision (AP) and mean average precision (mAP) were used as metrics for quantitative and qualitative evaluation. We compared the proposed YOLOv3-DMRF with several state-of-the-art deep learning networks. The experimental results show that YOLOv3-DMRF outperforms the others in mAP and detection time on both datasets: mAP reached 90.05% and 95.23% on the two test datasets, with detection times of 3.7 s and 2.2 s, respectively. These results demonstrate that the proposed YOLOv3-DMRF is effective and efficient for the detection and recognition of thyroid nodules in ultrasound images.
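The abstract does not spell out the exact layer configuration, but the idea of a dense multireceptive-fields block can be sketched with parallel dilated convolutions whose outputs are concatenated with the input, so that small-receptive-field edge and texture responses travel to deeper layers alongside wider-context responses. The following is a minimal PyTorch sketch only; the channel counts and dilation rates (1, 2, 3) are illustrative assumptions, not the values used in YOLOv3-DMRF.

```python
import torch
import torch.nn as nn

class DMRFBlock(nn.Module):
    """Sketch of a dense multireceptive-fields block: parallel 3x3 convolutions
    with different dilation rates, densely concatenated with the input so that
    edge/texture features captured at small receptive fields are passed on
    together with wider-context features. All sizes here are illustrative."""

    def __init__(self, in_ch, branch_ch=32, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # padding=d keeps the spatial size unchanged for a 3x3 kernel
                nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d),
                nn.BatchNorm2d(branch_ch),
                nn.LeakyReLU(0.1, inplace=True),
            )
            for d in dilations
        ])

    def forward(self, x):
        # Dense connection: concatenate the input with every branch output.
        return torch.cat([x] + [b(x) for b in self.branches], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 64, 104, 104)   # dummy feature map
    y = DMRFBlock(64)(x)
    print(y.shape)                     # torch.Size([1, 160, 104, 104])
```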
Highlights
With its ever-increasing incidence, the thyroid nodule is one of the most common nodular tumors in the adult population [1, 2]. The timely diagnosis of thyroid nodules is therefore essential.
Datasets and Evaluation Metrics. The dataset used in this study was obtained from 240 patients and comprises 699 ultrasound images of thyroid nodules, all followed by fine needle aspiration biopsy (FNAB). They were collected from the physical health center of a local 3A hospital. These ultrasound images belong to 34 male and 177 female patients.
Experiments I, II, and III each combine only two different dilation rates, and the mean average precision (mAP) increases by 2.89%, 1.53%, and 1.99%, respectively, compared with the 85.43% baseline (Table 4, VI). These results further confirm that adding dilated convolution improves performance to some extent.
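The exact branch layouts of experiments I–III are not reproduced here, but the benefit of mixing dilation rates can be made concrete with standard receptive-field arithmetic for stride-1 3x3 convolutions. The rate combinations in this sketch are illustrative assumptions, not the paper's configurations.

```python
def effective_kernel(k, d):
    """Effective kernel size of a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

def stacked_receptive_field(dilations, k=3):
    """Receptive field of stride-1 k x k convolutions stacked with the given
    dilation rates (standard receptive-field arithmetic)."""
    rf = 1
    for d in dilations:
        rf += effective_kernel(k, d) - 1
    return rf

# Combining two different rates widens the context more than repeating rate 1:
print(stacked_receptive_field([1, 1]))  # 5
print(stacked_receptive_field([1, 2]))  # 7
print(stacked_receptive_field([2, 3]))  # 11
```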
Summary
With its ever-increasing incidence, the thyroid nodule is one of the most common nodular tumors in the adult population [1, 2]. The timely diagnosis of thyroid nodules is therefore essential. Doctors typically diagnose thyroid nodules from experience, which can result in an ambiguous diagnosis [4] and thereby cause excessive treatment such as unnecessary biopsy and surgery. Earlier computer-aided approaches relied on good old-fashioned artificial intelligence (GOFAI) and handcrafted features; their two main drawbacks are high time complexity and unsatisfactory generalizability. The development of artificial intelligence, especially deep learning, has brought great advances in the field of medical image diagnosis. We propose a deep learning architecture, the you only look once v3 dense multireceptive fields convolutional neural network (YOLOv3-DMRF), based on YOLOv3. It comprises a dense multireceptive fields convolutional neural network (DMRF-CNN) and multiscale detection layers. The YOLOv3-DMRF architecture is presented to perform the detection and recognition of thyroid nodules in ultrasound images.
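As a rough illustration of the two-scale detection layers, the sketch below shows YOLOv3-style prediction heads for two classes (benign and malignant) at a coarse and a fine grid; coarser cells cover larger nodules and finer cells cover smaller ones. The anchor count, channel widths, and feature-map sizes are assumptions for illustration, not the paper's actual settings.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 2   # benign / malignant
NUM_ANCHORS = 3   # anchors per detection scale (assumed, as in YOLOv3)

def detection_head(in_ch):
    """One YOLO-style detection layer: for every grid cell and anchor it
    predicts 4 box offsets + 1 objectness score + NUM_CLASSES class scores."""
    return nn.Conv2d(in_ch, NUM_ANCHORS * (5 + NUM_CLASSES), kernel_size=1)

# Two detection layers at different scales: a coarse grid for large nodules
# and a finer grid for small ones (feature-map sizes are illustrative).
coarse = detection_head(512)(torch.randn(1, 512, 13, 13))
fine = detection_head(256)(torch.randn(1, 256, 26, 26))
print(coarse.shape, fine.shape)   # (1, 21, 13, 13) (1, 21, 26, 26)
```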