Abstract

Medical image segmentation plays an important role in diagnosis. Since the introduction of U-Net, numerous advancements have been implemented to enhance its performance and expand its applicability. The advent of Transformers in computer vision has led to the integration of self-attention mechanisms into U-Net, resulting in significant breakthroughs. However, the inherent complexity of Transformers renders these networks computationally demanding and parameter-heavy. Recent studies have demonstrated that multilayer perceptrons (MLPs), with their simpler architecture, can achieve performance comparable to Transformers in natural language processing and computer vision tasks. Building upon these findings, we enhance the previously proposed "Enhanced-Feature-Four-Fold-Net" (EF3-Net) by introducing an MLP-attention block to learn long-range dependencies and expand the receptive field. The enhanced network is termed "MLP-Attention Enhanced-Feature-Four-Fold-Net", abbreviated as "MAEF-Net". To further improve accuracy while reducing computational complexity, the proposed network incorporates additional efficient design elements. MAEF-Net was evaluated against several general and specialized medical image segmentation networks on four challenging medical image datasets. The results demonstrate that the proposed network is computationally efficient and achieves performance comparable or superior to EF3-Net and several state-of-the-art methods, particularly in segmenting blurry objects.
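The abstract does not detail the internal structure of the MLP-attention block, so the sketch below is only a hypothetical illustration of how an MLP can mix information across all spatial positions of a feature map to capture long-range dependencies, in the spirit of token-mixing MLPs. It is written in PyTorch; the class name, shapes, and hidden size are assumptions for illustration, not the authors' MAEF-Net implementation.

```python
# Illustrative sketch only: a generic MLP-based token-mixing block over a 2D
# feature map, NOT the exact MAEF-Net block described in the paper.
import torch
import torch.nn as nn


class MLPAttentionBlock(nn.Module):
    """Hypothetical token-mixing MLP over the spatial positions of a feature map.

    Mixing across all positions gives each pixel a global (long-range) receptive
    field at a cost that is linear in the number of tokens, unlike quadratic
    self-attention. Note: num_tokens fixes the input resolution.
    """

    def __init__(self, channels: int, num_tokens: int, hidden: int = 256):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        # Applied along the spatial (token) dimension -> global context mixing.
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, hidden),
            nn.GELU(),
            nn.Linear(hidden, num_tokens),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (b, h*w, c)
        mixed = self.norm(tokens).transpose(1, 2)  # (b, c, h*w)
        mixed = self.token_mlp(mixed)              # mix across all spatial positions
        return x + mixed.reshape(b, c, h, w)       # residual connection


if __name__ == "__main__":
    feat = torch.randn(1, 64, 16, 16)
    block = MLPAttentionBlock(channels=64, num_tokens=16 * 16)
    print(block(feat).shape)  # torch.Size([1, 64, 16, 16])
```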
