ABSTRACT Brain tumors present a formidable diagnostic challenge due to their aberrant cell growth. Accurate determination of tumor location and size is paramount for effective diagnosis. Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) are pivotal tools in clinical diagnosis, yet tumor segmentation within their images remains challenging, particularly at boundary pixels, owing to limited sensitivity. Recent work has introduced fusion-based strategies to refine segmentation accuracy, yet these methods often prove inadequate. In response, we introduce the Parallel-Way framework to surmount these obstacles. Our approach integrates MRI and PET data for a holistic analysis. First, we enhance image quality through noise reduction, bias field correction, and adaptive thresholding, using the Improved Kalman Filter (IKF), Expectation Maximization (EM), and the Improved ViBe algorithm (IVib), respectively. Next, we perform multi-modality image fusion with the Dual-Tree Complex Wavelet Transform (DTCWT) to combine data from both modalities. Following fusion, we extract pertinent features using the Advanced Capsule Network (ACN) and reduce feature dimensionality via Multi-objective Diverse Evolution-based selection. Tumor segmentation is then performed by a Twin Vision Transformer with a dual attention mechanism. The implemented Parallel-Way framework exhibits improved model performance: evaluation across multiple metrics, including accuracy, sensitivity, specificity, F1-score, and AUC, underscores its superiority over existing methods.
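To make the described data flow concrete, below is a minimal structural sketch of the five-stage pipeline in Python. Every function in it is a hypothetical placeholder built from simple NumPy stand-ins, not the paper's actual IKF, EM, IVib, DTCWT fusion, ACN, or Twin Vision Transformer implementations; it illustrates only the order of operations stated in the abstract.

import numpy as np

def denoise_ikf(img: np.ndarray) -> np.ndarray:
    # Placeholder for Improved Kalman Filter denoising: a crude
    # row-wise exponential smoothing stands in for the real filter.
    out = img.astype(float).copy()
    for i in range(1, out.shape[0]):
        out[i] = 0.8 * out[i] + 0.2 * out[i - 1]
    return out

def correct_bias_em(img: np.ndarray) -> np.ndarray:
    # Placeholder for EM-based bias field correction: subtracting the
    # global mean stands in for estimating a smooth bias field.
    return img - img.mean()

def threshold_ivib(img: np.ndarray) -> np.ndarray:
    # Placeholder for Improved ViBe adaptive thresholding: keep pixels
    # more than one standard deviation above the mean.
    return np.where(img > img.mean() + img.std(), img, 0.0)

def fuse_dtcwt(mri: np.ndarray, pet: np.ndarray) -> np.ndarray:
    # Placeholder for DTCWT fusion: a per-pixel maximum stands in for
    # coefficient-level fusion rules in the complex wavelet domain.
    return np.maximum(mri, pet)

def extract_features_acn(fused: np.ndarray) -> np.ndarray:
    # Placeholder for Advanced Capsule Network features: 8x8 block means.
    h, w = fused.shape
    crop = fused[: h - h % 8, : w - w % 8]
    return crop.reshape(crop.shape[0] // 8, 8, crop.shape[1] // 8, 8).mean(axis=(1, 3)).ravel()

def select_features_mode(feats: np.ndarray, k: int = 64) -> np.ndarray:
    # Placeholder for Multi-objective Diverse Evolution selection:
    # keep the k largest-magnitude features.
    return feats[np.argsort(np.abs(feats))[-k:]]

def segment_twin_vit(fused: np.ndarray, feats: np.ndarray) -> np.ndarray:
    # Placeholder for the Twin Vision Transformer with dual attention:
    # threshold the fused image at a feature-derived level.
    return (fused > np.median(feats)).astype(np.uint8)

def parallel_way(mri: np.ndarray, pet: np.ndarray) -> np.ndarray:
    # Preprocess each modality in parallel, then fuse, extract and
    # select features, and segment -- the order given in the abstract.
    pre = [threshold_ivib(correct_bias_em(denoise_ikf(x))) for x in (mri, pet)]
    fused = fuse_dtcwt(*pre)
    feats = select_features_mode(extract_features_acn(fused))
    return segment_twin_vit(fused, feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mri, pet = rng.random((128, 128)), rng.random((128, 128))
    mask = parallel_way(mri, pet)
    print("mask shape:", mask.shape, "segmented pixels:", int(mask.sum()))

The sketch runs end to end on random arrays; in the paper each stub would be replaced by the corresponding learned or model-based component, and the two preprocessing branches are what give the framework its "Parallel-Way" structure.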