Abstract
Medical image segmentation with convolutional neural networks (CNNs) significantly enhances clinical analysis and disease diagnosis. However, medical images inherently exhibit large intra-class variability, minimal inter-class differences, and substantial noise, so extracting robust contextual information and aggregating discriminative features for fine-grained segmentation remains a formidable task. Additionally, existing methods often struggle to produce accurate mask edges, leading to blurred boundaries and reduced segmentation precision. This paper introduces a novel Edge-guided and Hierarchical Aggregation Network (EHANet) that excels at capturing rich contextual information and preserving fine spatial details, addressing the critical issues of inaccurate mask edges and detail loss prevalent in current segmentation models. The Inter-layer Edge-aware Module (IEM) enhances edge prediction accuracy by fusing early encoder layers, ensuring precise edge delineation. The Efficient Fusion Attention Module (EFA) adaptively emphasizes critical spatial and channel features while filtering out redundancies, enhancing the model's perception and representation capabilities. The Adaptive Hierarchical Feature Aggregation (AHFA) module optimizes feature fusion within the decoder, maintaining essential information and improving reconstruction fidelity through hierarchical processing. Quantitative and qualitative experiments on four public datasets demonstrate the effectiveness of EHANet in achieving superior mIoU, mDice, and edge accuracy against eight other state-of-the-art segmentation methods, highlighting its robustness and precision in diverse clinical scenarios.
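For illustration only, the sketch below shows one plausible reading of the EFA module described above: a lightweight block that re-weights channel and spatial responses before fusion. The layer sizes, reduction ratio, and gating order are assumptions, not the paper's actual implementation.

```python
# Minimal PyTorch sketch of a fusion-attention block in the spirit of EFA.
# All design details (reduction ratio, 7x7 spatial gate, gating order) are
# assumed for illustration and may differ from the authors' implementation.
import torch
import torch.nn as nn

class EfficientFusionAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, then re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: compress channels, then highlight salient locations.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)   # emphasize informative channels
        x = x * self.spatial_gate(x)   # emphasize informative locations
        return x

# Usage: refine a 64-channel encoder feature map.
feats = torch.randn(2, 64, 56, 56)
refined = EfficientFusionAttention(64)(feats)
print(refined.shape)  # torch.Size([2, 64, 56, 56])
```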