Abstract

The automatic segmentation of medical images is an important task in clinical applications. However, because of complex organ backgrounds, unclear boundaries, and the variable sizes of different organs, some features are lost during network learning and segmentation accuracy is low. These issues prompted us to study how to better preserve the deep feature information of an image and to address the low segmentation accuracy caused by unclear image boundaries. In this study, we (1) build a reliable deep learning network framework, named BGRANet, to improve segmentation performance for medical images; (2) propose a packet rotation convolutional fusion encoder network to extract features; (3) build a boundary-enhanced guided packet rotation dual attention decoder network, which enhances the boundary of the segmentation map and effectively fuses more prior information; and (4) propose a multi-resolution fusion module to generate high-resolution feature maps. We demonstrate the effectiveness of the proposed method on two publicly available datasets. BGRANet was trained and tested on the prepared datasets, and the experimental results show that the proposed model achieves better segmentation performance. For 4-class classification (CHAOS dataset), the average Dice similarity coefficient reached 91.73%. For 2-class classification (Herlev dataset), the precision, sensitivity, specificity, accuracy, and Dice reached 93.75%, 94.30%, 98.19%, 97.43%, and 98.08%, respectively. These results show that BGRANet, with its boundary-enhanced guided packet rotation dual attention decoder network, improves segmentation quality for medical images while achieving high accuracy with a reduced number of parameters.
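The Dice similarity coefficient reported above is a standard overlap metric for segmentation masks. The sketch below shows how it is commonly computed for binary masks with NumPy; the function name, the small example masks, and the epsilon smoothing term are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A (prediction) and B (ground truth).
    # eps guards against division by zero when both masks are empty (an assumed convention).
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x3 masks: 2 overlapping foreground pixels, 3 foreground pixels in each mask.
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
target = np.array([[1, 0, 0],
                   [0, 1, 1]])
print(round(dice_coefficient(pred, target), 4))  # 2*2 / (3+3) ≈ 0.6667
```

A Dice of 1.0 means perfect overlap, so the 91.73% (CHAOS) and 98.08% (Herlev) scores in the abstract indicate near-complete agreement between predicted and reference masks.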
