Abstract
When deep learning algorithms are applied to magnetic resonance (MR) image segmentation, large numbers of annotated images are required as data support. However, the specificity of MR images makes large-scale annotated data difficult and costly to acquire. To reduce this dependence on annotated data, this paper proposes a meta-learning U-shaped network (Meta-UNet) for few-shot MR image segmentation. Meta-UNet can complete MR image segmentation tasks with only a small amount of annotated data and still obtain good segmentation results. Meta-UNet improves on U-Net by introducing dilated convolution, which enlarges the model's receptive field and improves its sensitivity to targets of different scales, and by introducing an attention mechanism, which improves the model's adaptability across scales. It further introduces a meta-learning mechanism and employs a composite loss function to provide effective, well-supervised guidance for model training. We train Meta-UNet on a set of segmentation tasks and then evaluate the trained model on a new, unseen segmentation task, on which it achieves high-precision segmentation of the target images. Meta-UNet improves the mean Dice similarity coefficient (DSC) compared with the voxel morph network (VoxelMorph), data augmentation using learned transformations (DataAug), and the label transfer network (LT-Net). The experiments show that the proposed method can effectively perform MR image segmentation from a small number of samples, providing a reliable aid for clinical diagnosis and treatment.
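The abstract names three architectural ingredients (dilated convolution, an attention mechanism, and a composite loss) without giving implementation details. Below is a minimal PyTorch sketch of what such building blocks might look like; the dilation rate, the squeeze-and-excitation-style channel attention, and the Dice-plus-cross-entropy composite with weight alpha are all illustrative assumptions, not the paper's actual design.

```python
# Illustrative sketch only: the abstract does not specify dilation rates,
# the attention design, or the loss weighting. All choices below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedConvBlock(nn.Module):
    """U-Net-style conv block with a dilated convolution to enlarge the
    receptive field (dilation rate is a hypothetical choice)."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(x)

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention; the paper's actual
    attention mechanism may differ."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(ch, ch // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # global average pool -> channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)   # rescale each channel

def composite_loss(logits, target, alpha=0.5, eps=1e-6):
    """Hypothetical composite loss: cross-entropy plus soft Dice on the
    foreground probability (binary segmentation assumed)."""
    ce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    dice = 1 - (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    return alpha * ce + (1 - alpha) * dice

# Example: apply the block and attention to a dummy single-channel MR slice batch.
x = torch.randn(2, 1, 128, 128)
feat = ChannelAttention(16)(DilatedConvBlock(1, 16)(x))
```

In a meta-learning setup, blocks like these would form the U-Net backbone whose parameters are adapted across segmentation tasks; the episodic training loop itself is not specified in the abstract and is not shown here.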