Abstract

Deep learning techniques can help reduce inter-physician analysis variability and medical experts' workloads, thereby enabling more accurate diagnoses. However, their implementation requires large-scale annotated datasets whose acquisition incurs heavy time and human-expertise costs. Hence, to significantly reduce the annotation cost, this study presents a novel framework that enables the deployment of deep learning methods for ultrasound (US) image segmentation using only very limited manually annotated samples. We propose SegMix, a fast and efficient approach that exploits a segment-paste-blend concept to generate a large number of annotated samples from a few manually acquired labels. In addition, a series of US-specific augmentation strategies built upon image enhancement algorithms is introduced to make maximum use of the limited number of manually delineated images available. The feasibility of the proposed framework is validated on the left ventricle (LV) segmentation and fetal head (FH) segmentation tasks, respectively. Experimental results demonstrate that using only 10 manually annotated images, the proposed framework can achieve a Dice and JI of 82.61% and 83.92% for LV segmentation, and 88.42% and 89.27% for FH segmentation, respectively. Compared with training on the entire training set, this represents an annotation cost reduction of over 98% while achieving comparable segmentation performance. This indicates that the proposed framework delivers satisfactory deep learning performance when only a very limited number of annotated samples is available. Therefore, we believe that it can be a reliable solution for annotation cost reduction in medical image analysis.
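To make the segment-paste-blend idea concrete, the sketch below illustrates one plausible reading of it: a labeled foreground segment is cut from an annotated image, pasted onto a new background, and blended along the boundary with a smoothed alpha mask, yielding a new image-mask pair. This is a minimal illustration only; the function names (`segmix`, `box_blur`), the box-blur smoothing, and all parameters are assumptions, not the paper's actual implementation.

```python
import numpy as np

def box_blur(mask, k=5):
    """Simple box blur (hypothetical smoothing step) used to soften
    the paste boundary into a continuous alpha map."""
    pad = k // 2
    padded = np.pad(mask.astype(float), pad, mode="edge")
    out = np.zeros(mask.shape, dtype=float)
    h, w = mask.shape
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def segmix(bg_image, fg_image, fg_mask, k=5):
    """Paste the masked foreground segment onto a background image,
    blending the edges, and return the new image with its new label."""
    alpha = np.clip(box_blur(fg_mask, k), 0.0, 1.0)
    new_image = alpha * fg_image + (1.0 - alpha) * bg_image
    new_mask = (alpha > 0.5).astype(np.uint8)  # re-binarize the label
    return new_image, new_mask
```

In this reading, each synthesized pair comes with its segmentation label for free, since the pasted segment's mask is known, which is what lets a handful of manual annotations expand into a large training set.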

