Abstract
Medical image segmentation is essential for analyzing medical data, supporting diagnostics, treatment planning, and research. However, current methods struggle with variability across imaging modalities, poor generalization, and the detection of rare anatomical structures. To address these issues, we propose MedFusion-TransNet, a novel multi-modal fusion approach built on transformer-based architectures. By integrating multi-scale feature encoding, attention mechanisms, and dynamic optimization, MedFusion-TransNet significantly improves segmentation precision. Two components, the Context-Aware Segmentation Network (CASNet) and Dynamic Region-Guided Optimization (DRGO), further refine segmentation by focusing on key anatomical regions. Together, these innovations address challenges such as imbalanced datasets, boundary delineation, and multi-modal complexity. Validation on benchmark datasets demonstrates substantial improvements in accuracy, robustness, and boundary precision, marking a significant step forward in segmentation technology. MedFusion-TransNet offers a transformative tool for improving the quality and reliability of medical image analysis across diverse clinical applications.
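To make the attention-based multi-modal fusion described above concrete, the sketch below shows a generic way to fuse feature maps from two imaging modalities using learned spatial attention weights. This is not the paper's CASNet or DRGO implementation; the module name, channel sizes, and gating scheme are illustrative assumptions only.

```python
# Minimal sketch (PyTorch) of attention-weighted multi-modal feature fusion.
# NOT the authors' CASNet/DRGO; names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class MultiModalAttentionFusion(nn.Module):
    """Fuse per-modality feature maps with learned spatial attention weights."""

    def __init__(self, in_channels: int = 64, num_modalities: int = 2):
        super().__init__()
        # One lightweight spatial-attention head per modality (assumed design).
        self.attn = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_channels, 1, kernel_size=1), nn.Sigmoid())
            for _ in range(num_modalities)
        ])
        # Project the concatenated, attention-weighted features back to in_channels.
        self.project = nn.Conv2d(in_channels * num_modalities, in_channels, kernel_size=1)

    def forward(self, feats):
        # feats: list of (B, C, H, W) tensors, one per imaging modality.
        weighted = [f * a(f) for f, a in zip(feats, self.attn)]  # spatial gating
        return self.project(torch.cat(weighted, dim=1))          # fused feature map


if __name__ == "__main__":
    ct, mri = torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128)
    fused = MultiModalAttentionFusion()([ct, mri])
    print(fused.shape)  # torch.Size([1, 64, 128, 128])
```

In practice, a fusion block like this would sit between the per-modality encoders and the segmentation decoder, so that regions emphasized by the attention weights (e.g. organ boundaries) dominate the fused representation.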