Abstract
This paper establishes a fully automatic real-time image segmentation and recognition system for breast ultrasound intervention robots. It adopts the basic architecture of a U-shaped convolutional network (U-Net), analyses the actual application scenarios of semantic segmentation of breast ultrasound images, and adds dropout layers to the U-Net architecture to reduce the redundancy in texture details and prevent overfitting. The main innovation of this paper is proposing an expanded training approach to obtain an expanded U-Net. The output map of the expanded U-Net can retain texture details and edge features of breast tumours. Using grey-level probability labels to train the U-Net is faster than using ordinary labels. With the expanded training approach, the average Dice coefficient (standard deviation) and the average IOU coefficient (standard deviation) are 90.5% (±0.02) and 82.7% (±0.02), respectively. The Dice coefficient of the expanded U-Net is 7.6 percentage points larger than that of a general U-Net, and the IOU coefficient is 11 percentage points larger. The expanded U-Net can extract the context of breast ultrasound images while retaining the texture details and edge features of tumours, quickly and automatically achieving precise segmentation and multi-class recognition of breast ultrasound images.
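As a rough illustration of how the reported overlap metrics are computed, the sketch below implements the Dice and IOU coefficients for binary segmentation masks in NumPy. The example masks and the smoothing constant `eps` are assumptions for illustration, not values from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou_coefficient(pred, target, eps=1e-7):
    """IOU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)

# Toy example: a predicted tumour mask vs. a ground-truth mask
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
target = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
print(f"Dice: {dice_coefficient(pred, target):.3f}")  # ~0.857
print(f"IOU:  {iou_coefficient(pred, target):.3f}")   # ~0.750
```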
Highlights
Breast cancer is the most common malignancy in women and the main cause of cancer deaths among women worldwide [1]
The present study focuses on highlighting the contributions of fully automatic real-time ultrasound image segmentation and recognition techniques for breast ultrasound intervention robots to assist doctors in performing a biopsy, presenting an extensive review of the state-of-the-art U-shaped convolutional network (U-Net) [7]
The main innovations of this paper are briefly summarized as follows: the research adopts the basic architecture of a U-shaped convolutional network (U-Net), analyses the actual application scenarios of semantic segmentation of breast ultrasound images, adds dropout layers to the U-Net architecture to reduce the redundancy in texture details and prevent overfitting [8], and proposes an expanded training approach to obtain the expanded U-Net
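A minimal sketch of the kind of modification described above: one U-Net encoder stage with a dropout layer appended, written in PyTorch. The channel sizes, dropout rate, and layer ordering are illustrative assumptions; the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One U-Net encoder stage: two 3x3 convolutions followed by dropout.
    The dropout rate p=0.5 is an assumed value for illustration."""
    def __init__(self, in_ch, out_ch, p=0.5):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout2d(p=p),  # discourages reliance on redundant texture details
        )

    def forward(self, x):
        return self.block(x)

# Usage: one encoder stage applied to a greyscale ultrasound image
x = torch.randn(1, 1, 256, 256)      # batch of one 256x256 greyscale image
features = ConvBlock(1, 64)(x)       # -> (1, 64, 256, 256)
pooled = nn.MaxPool2d(2)(features)   # -> (1, 64, 128, 128), fed to the next stage
```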
Summary
Breast cancer is the most common malignancy in women and the main cause of cancer deaths among women worldwide [1]. During a breast biopsy with a medical robot, segmenting the breast ultrasound images into all major functional tissues automatically with high accuracy is of primary importance [5]. Some superior convolutional neural networks have been developed in the field of medical images [6]. These approaches have been a natural choice for breast ultrasound images. The present study focuses on highlighting the contributions of fully automatic real-time ultrasound image segmentation and recognition techniques for breast ultrasound intervention robots to assist doctors in performing a biopsy, presenting an extensive review of the state-of-the-art U-shaped convolutional network (U-Net) [7]. This paper uses a data enhancement approach based on geometric transformation to expand the scale of the dataset and solve the problem of overfitting during network training [9].
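A minimal sketch of geometric data enhancement of the kind described above, applying the same random flips and rotations to an ultrasound image and its label mask. The specific transforms and probabilities are assumptions for illustration; the paper's exact augmentation settings are not given in this summary.

```python
import numpy as np

def augment_geometric(image, mask, rng=np.random.default_rng()):
    """Apply identical random geometric transforms to an image and its mask."""
    if rng.random() < 0.5:                       # horizontal flip
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:                       # vertical flip
        image, mask = np.flipud(image), np.flipud(mask)
    k = rng.integers(0, 4)                       # rotate by a multiple of 90 degrees
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    return image.copy(), mask.copy()

# Expand the training set by generating several augmented copies per image
image = np.random.rand(256, 256)                 # stand-in for an ultrasound image
mask = (image > 0.5).astype(np.uint8)            # stand-in for its segmentation label
augmented_pairs = [augment_geometric(image, mask) for _ in range(4)]
```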