Abstract

CT scans are an important reference for disease diagnosis in clinical practice. Automatic segmentation of organ regions saves considerable time and labor and gives doctors a more intuitive view of the structures of the human body. However, automatic multi-organ segmentation in CT images remains challenging due to complicated anatomical structures and low tissue contrast. Traditional segmentation methods perform poorly on organs with large abdominal deformation, small volume, and blurred tissue boundaries, and conventional network architectures are rarely designed to meet the lightweight and efficiency requirements of clinical practice. In this paper, we propose a novel segmentation network named Self-Adjustable Organ Attention U-Net (SOA-Net) to overcome these limitations. To provide a pragmatic and effective segmentation method, SOA-Net comprises a multi-branch feature attention (MBFA) module and a feature attention aggregation (FAA) module. Both modules contain multiple branches with different kernel sizes that capture feature information at multiple scales, matching the varying sizes of the target organs. An adjustable attention mechanism weights these branches so that the fusion layer obtains receptive fields of different sizes. Overall, SOA-Net is a 3D self-adjustable, organ-aware deep network that adaptively adjusts its attention and receptive-field sizes according to the scales of the target organs, enabling efficient segmentation of multiple abdominal organs. We evaluate our method on the AbdomenCT-1K and AMOS2022 datasets, and the experiments show that our model achieves better segmentation performance than state-of-the-art segmentation networks. (Our code will be made publicly available soon.)
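For readers who want a concrete picture, the sketch below illustrates the kind of multi-branch, adjustable-attention fusion the abstract describes, in the spirit of selective-kernel attention: parallel 3D convolutional branches with different kernel sizes are combined through branch-wise softmax weights, yielding an adaptive receptive field. All class names, kernel sizes, and the reduction ratio are illustrative assumptions, not the authors' SOA-Net implementation.

```python
# Minimal sketch of a multi-branch feature-attention block (MBFA-style).
# Assumptions: kernel sizes (3, 5, 7), reduction ratio 4, and all names
# are hypothetical; this is not the authors' released code.
import torch
import torch.nn as nn


class MultiBranchFeatureAttention3D(nn.Module):
    def __init__(self, channels, kernel_sizes=(3, 5, 7), reduction=4):
        super().__init__()
        # One convolutional branch per kernel size to capture features
        # at different spatial scales.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(channels, channels, kernel_size=k, padding=k // 2, bias=False),
                nn.BatchNorm3d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        hidden = max(channels // reduction, 8)
        # Shared bottleneck that summarizes the fused feature map.
        self.squeeze = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, hidden, kernel_size=1, bias=False),
            nn.ReLU(inplace=True),
        )
        # One attention head per branch; a softmax across branches yields
        # adjustable per-channel weights, i.e. an adaptive receptive field.
        self.attend = nn.ModuleList([
            nn.Conv3d(hidden, channels, kernel_size=1) for _ in kernel_sizes
        ])

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]       # each [B, C, D, H, W]
        fused = torch.stack(feats, dim=1).sum(dim=1)          # sum over branches
        summary = self.squeeze(fused)                         # [B, hidden, 1, 1, 1]
        logits = torch.stack([head(summary) for head in self.attend], dim=1)
        weights = torch.softmax(logits, dim=1)                # softmax over branches
        out = (torch.stack(feats, dim=1) * weights).sum(dim=1)
        return out


if __name__ == "__main__":
    block = MultiBranchFeatureAttention3D(channels=32)
    volume = torch.randn(1, 32, 16, 64, 64)   # toy 3D CT feature map
    print(block(volume).shape)                # torch.Size([1, 32, 16, 64, 64])
```

A block like this can sit inside each encoder or decoder stage of a 3D U-Net, which is one plausible way to realize the scale-adaptive behavior the abstract attributes to the MBFA and FAA modules.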
