Abstract

Due to the complexity of the anatomical structure of human organs, medical image segmentation remains a challenging computer vision task. A Convolutional Neural Network (CNN) requires a rich feature representation, which needs not only convolutional layers from shallow to deep but also resolutions from small to large. Although a CNN can fuse mid-level features through short-cut connections, this is only a simple, "shallow" connection. Thus, how to obtain useful features and how to use them to improve segmentation remain key issues. In this paper, Multi-features Refinement and Aggregation (MRA) makes full use of hierarchical features by fusing features at several levels, and reveals the importance of feature refinement and aggregation in the medical image segmentation process. The network obtains low-level, high-level, and even mid-level features by sampling; after aggregation and re-extraction, these features are combined more effectively. Experimental results show that our method significantly improves segmentation accuracy compared with existing feature fusion schemes. Our approach also generalizes to different backbone networks with consistent accuracy gains in brain segmentation, and it sets a new state of the art on the BraTS-2015 benchmark.
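The abstract describes fusing low-, mid-, and high-level features by sampling them to a common resolution, aggregating them, and re-extracting. The following is a minimal NumPy sketch of that general pattern, not the authors' MRA module; the nearest-neighbour upsampling, channel counts, and the 1x1 projection used for "re-extraction" are illustrative assumptions.

```python
import numpy as np

def upsample2x(x):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map.
    return x.repeat(2, axis=1).repeat(2, axis=2)

def aggregate(low, mid, high):
    # Bring all levels to the resolution of the low-level (largest) map,
    # fuse by channel concatenation, then "re-extract" with a per-pixel
    # linear projection (a stand-in for a 1x1 convolution).
    mid_up = upsample2x(mid)                 # (C, H, W)
    high_up = upsample2x(upsample2x(high))   # (C, H, W)
    fused = np.concatenate([low, mid_up, high_up], axis=0)  # (3C, H, W)
    c_in = fused.shape[0]
    w = np.random.default_rng(0).standard_normal((low.shape[0], c_in)) / np.sqrt(c_in)
    return np.einsum('oc,chw->ohw', w, fused)

low = np.ones((16, 32, 32))    # shallow, high-resolution features
mid = np.ones((16, 16, 16))    # intermediate features
high = np.ones((16, 8, 8))     # deep, low-resolution features
out = aggregate(low, mid, high)
print(out.shape)  # (16, 32, 32)
```

The point of the sketch is the order of operations: resample first so every level shares one spatial grid, then aggregate, then re-extract, rather than connecting any single pair of levels directly.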

Highlights

  • Intracranial tumors are known as brain tumors, and their causes are still unknown

  • Inspired by the idea of RefineNet, we further study the influence of feature refinement on the experimental results based on these architectures and, combining the key idea of skip connections, propose the Multi-features Refinement and Aggregation Network (MRANet)

  • By borrowing the idea of Squeeze-and-Excitation Networks (SENet), which achieve good performance in natural image recognition, an attention mechanism is adopted to improve the residual structure as the Residual Conv (RC)-B
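The SENet-style recalibration mentioned in the last highlight can be sketched as follows. This is a generic squeeze-and-excitation residual block in NumPy, assuming the standard SENet recipe (global average pooling, a bottleneck MLP with reduction ratio r, sigmoid gating); it is not the paper's exact RC-B design, whose internal layout is not given here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_residual_block(x, w1, w2):
    # x: (C, H, W) feature map; w1: (C//r, C), w2: (C, C//r) MLP weights.
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    s = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU) + sigmoid gives per-channel weights in (0, 1).
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))
    # Recalibrate each channel, then add the identity shortcut.
    return x + x * a[:, None, None]

C, r = 8, 2
rng = np.random.default_rng(1)
x = rng.standard_normal((C, 4, 4))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = se_residual_block(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the gate values lie in (0, 1), the block can only scale each channel of the residual branch, which lets the network emphasize informative channels without discarding the identity path.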


Summary

INTRODUCTION

Intracranial tumors are known as brain tumors, and their causes are still unknown. Because UNet directly connects high-level and low-level features to recover the information lost during down-sampling, it has a limitation: after the up-sampling and down-sampling processes, there is a 'semantic gap' between the two layers at the same level, especially in medical images [23]. This problem is very prominent, and the direct connection can lead to unpredictable consequences.
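The direct connection criticized above can be made concrete. The following is a minimal sketch of a UNet-style skip, assuming nearest-neighbour upsampling and channel concatenation; the shapes are illustrative. The shallow encoder map and the deep decoder map are joined with no intermediate refinement, which is exactly the "shallow" connection said to leave a semantic gap.

```python
import numpy as np

def unet_skip(encoder_feat, decoder_feat):
    # UNet's direct skip: upsample the decoder map to the encoder's
    # resolution and concatenate along the channel axis. Nothing bridges
    # the semantic difference between the two levels.
    up = decoder_feat.repeat(2, axis=1).repeat(2, axis=2)
    return np.concatenate([encoder_feat, up], axis=0)

enc = np.zeros((32, 64, 64))   # shallow, high-resolution encoder features
dec = np.zeros((32, 32, 32))   # deep, low-resolution decoder features
skip = unet_skip(enc, dec)
print(skip.shape)  # (64, 64, 64)
```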

RELATED WORK
METHOD

MRANet contains three modules.
LOSS FUNCTION
DATASET
EXPERIMENT
Findings
CONCLUSION