Rather than depicting a scene with many unrelated photographs, image fusion combines multiple similar images into a single, unified image with greater detail. The resolution of most source images is limited by the imaging sensors and by the wideband signal needed to transmit them. This study proposes new methods for fusing medical images from different modalities in order to improve image quality and, by extension, the accuracy with which brain tumors can be detected and identified. The proposed approach uses an improved convolutional neural network (ICNN) and region-growth-based K-means clustering (RKMC) to boost the quality of brain image fusions obtained from computed tomography scanned images (CTSI) and magnetic resonance imaging (MRI). The pipeline consists of noise removal, image segmentation, feature extraction and selection, and image fusion. Adaptive median filtering (AMF) is first applied to remove noise from the brain MRI images and CTSI, improving image quality. The RKMC algorithm then segments the MRI and CTSI scans into their constituent regions, which can be viewed either as grayscale images or as object images; it adequately accounts for possible tumors appearing in the white regions. Modified principal component analysis (MPCA) is used to extract more useful image features, after which adaptive firefly optimization (AFO) selects the features with the highest fitness values. Multimodal image fusion is carried out by the ICNN, which generates the image's lower-, middle-, and higher-level content. Incorporating important and relevant image characteristics from all viewpoints improves feature training and testing. The results show that the proposed RKMC+ICNN outperforms state-of-the-art approaches in terms of accuracy, PSNR, RMSE, and runtime.
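The abstract does not specify the AMF variant's parameters; the following is a minimal sketch of the standard adaptive median filter underlying this denoising step, assuming a window grown from 3×3 up to a hypothetical maximum of 7×7.

```python
import numpy as np

def adaptive_median_filter(img, max_window=7):
    """Classic adaptive median filter: grow the window until the local
    median is not an impulse, then replace the pixel only if it is one."""
    img = img.astype(np.float64)
    pad = max_window // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.copy()
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            cy, cx = r + pad, c + pad
            for w in range(3, max_window + 1, 2):        # 3x3, 5x5, ...
                half = w // 2
                win = padded[cy - half:cy + half + 1, cx - half:cx + half + 1]
                zmin, zmed, zmax = win.min(), np.median(win), win.max()
                if zmin < zmed < zmax:                   # median is not an impulse
                    if not (zmin < img[r, c] < zmax):    # pixel is an impulse
                        out[r, c] = zmed
                    break                                # otherwise keep the pixel
            else:
                out[r, c] = zmed                         # largest-window fallback
    return out
```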
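The exact RKMC formulation is not given here; as one plausible reading, the sketch below combines plain K-means on pixel intensities with 4-connected region growing over the brightest ("white") cluster, where tumors are said to appear. The function names and the choice of k=3 clusters are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50, seed=0):
    """Plain K-means on pixel intensities (1-D feature)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(np.float64)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def region_grow(mask, seed_rc):
    """4-connected region growing inside a binary mask from one seed."""
    region = np.zeros_like(mask, dtype=bool)
    stack = [seed_rc]
    while stack:
        r, c = stack.pop()
        if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                and mask[r, c] and not region[r, c]):
            region[r, c] = True
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region

def rkmc_segment(img, k=3):
    labels, centers = kmeans_1d(img.ravel(), k=k)
    label_img = labels.reshape(img.shape)
    bright = int(np.argmax(centers))           # brightest cluster: tumor candidates
    mask = label_img == bright
    seeds = np.argwhere(mask)
    if len(seeds) == 0:
        return np.zeros_like(mask)
    return region_grow(mask, tuple(seeds[0]))  # grow one connected white region
```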
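The "modified" part of MPCA is not detailed in the abstract, so the sketch below shows only the standard PCA projection (via SVD) that underlies it, applied to flattened image patches; `n_components=16` is an assumed value.

```python
import numpy as np

def pca_features(patches, n_components=16):
    """Project flattened image patches onto the top principal components.
    `patches` has shape (n_samples, patch_dim)."""
    X = patches - patches.mean(axis=0, keepdims=True)  # center the data
    # SVD of the centered matrix gives the principal directions in Vt
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T                     # reduced feature vectors
```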
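Likewise, the adaptive element of AFO is unspecified; the following sketch implements a classic binary firefly search over feature subsets, with a caller-supplied `fitness` function standing in for the paper's fitness values. All parameter defaults are assumptions.

```python
import numpy as np

def firefly_select(n_features, fitness, n_fireflies=10, iters=30,
                   beta0=1.0, gamma=1.0, alpha=0.2, seed=0):
    """Binary firefly search over feature subsets. `fitness(mask)` scores a
    boolean mask over feature columns; higher is better (hypothetical metric)."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_fireflies, n_features))   # continuous positions in [0, 1]
    for _ in range(iters):
        masks = pos > 0.5                         # threshold to feature subsets
        scores = np.array([fitness(m) for m in masks])
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if scores[j] > scores[i]:         # move i toward brighter j
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += (beta * (pos[j] - pos[i])
                               + alpha * (rng.random(n_features) - 0.5))
        pos = np.clip(pos, 0.0, 1.0)
    best = pos[np.argmax([fitness(p > 0.5) for p in pos])]
    return best > 0.5                             # selected feature mask
```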
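Finally, the ICNN architecture is not described beyond generating lower-, middle-, and higher-level content; the toy PyTorch network below illustrates one way a fusion CNN with three such stages could be wired, and is not the authors' model. The layer widths and the sigmoid output are assumptions.

```python
import torch
import torch.nn as nn

class FusionCNN(nn.Module):
    """Toy fusion network: three conv stages stand in for the lower-, middle-,
    and higher-level content the ICNN is described as generating."""
    def __init__(self):
        super().__init__()
        self.lower  = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU())
        self.middle = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.higher = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.head   = nn.Conv2d(16, 1, 1)          # 1-channel fused image

    def forward(self, mri, ct):
        x = torch.cat([mri, ct], dim=1)            # stack the two modalities
        x = self.higher(self.middle(self.lower(x)))
        return torch.sigmoid(self.head(x))

# Usage with random stand-in scans (1 sample, 1 channel, 128x128):
net = FusionCNN()
fused = net(torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128))
print(fused.shape)  # torch.Size([1, 1, 128, 128])
```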