Abstract

Automatic brain structure segmentation in Magnetic Resonance Imaging (MRI) plays an important role in the diagnosis of various neuropsychiatric diseases. However, most existing methods yield unsatisfactory results due to blurred boundaries and complex structures. Improving segmentation requires the model to be explicit about both the spatial localization and the shape appearance of targets, which correspond to low-frequency content features and high-frequency edge features, respectively. Therefore, in this paper, to extract rich edge and content feature representations, we focus on the composition of the features and utilize a frequency decoupling (FD) block to separate the low-frequency and high-frequency parts of each feature. Further, a novel edge-aware network (EA-Net) is proposed for jointly learning to segment brain structures and detect object edges. First, an encoder–decoder sub-network extracts multi-level information from the input MRI, which is then passed to the FD block for frequency separation. Next, different mechanisms optimize the low-frequency and high-frequency features, and the two parts are finally fused to generate the prediction. In particular, we extract a content mask and an edge mask from the optimized features under different supervisions, which forces the network to learn the boundary features of the object. Extensive experiments are performed on two public brain MRI T1 scan datasets (the IBSR dataset and the MALC dataset) to evaluate the effectiveness of the proposed algorithm. The experiments show that EA-Net outperforms state-of-the-art methods, improving the Dice similarity coefficient (DSC) by up to 1.31% over the U-Net model and its variants. Moreover, we evaluate EA-Net under different noise disturbances, and the results demonstrate the robustness and superiority of our method on low-quality, noisy MRI.
Code is available at https://github.com/huqian999/EA-Net.
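The core idea of the FD block — splitting a feature map into a low-frequency content part and a high-frequency edge part — can be illustrated with a minimal NumPy sketch. This is an assumption-laden stand-in, not the paper's implementation: it uses a simple box-filter low-pass to obtain the content component and takes the residual as the edge component, whereas the actual FD block operates on learned multi-channel features inside the network.

```python
import numpy as np

def frequency_decouple(feature, kernel_size=3):
    """Split a 2-D feature map into a low-frequency (content) part and a
    high-frequency (edge) part. A box-filter average serves as the
    low-pass; the residual carries edges and fine detail. This is a
    simplified, hypothetical analogue of the paper's FD block."""
    pad = kernel_size // 2
    padded = np.pad(feature, pad, mode="edge")
    low = np.empty_like(feature, dtype=float)
    h, w = feature.shape
    for i in range(h):
        for j in range(w):
            # Local mean = low-frequency content at this position.
            low[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    high = feature - low  # residual = high-frequency edge component
    return low, high

# A step image: the high-frequency part responds only near the edge.
step = np.zeros((8, 8))
step[:, 4:] = 1.0
low, high = frequency_decouple(step)
```

By construction the two parts sum back to the original feature, mirroring the fusion step in which the optimized low- and high-frequency branches are recombined to form the final prediction.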
