Abstract
3D Magnetic Resonance Imaging (3D-MRI) analysis of brain tumors is an important tool for gathering the information needed for diagnosis and therapy planning. However, during brain tumor segmentation, existing techniques suffer from segmentation errors when identifying the tumor location and extended tumor regions, owing to improper extraction of initial contour points and overlapping tissue intensity distributions. Hence, a novel Duo-step optimised Pyramidal SegNet is proposed, in which a multiscale contrast convolutional attention module improves contrast, and the tumor edge is extracted based on location and tumor extension using Duo-step darning needle optimisation, which sets the initial contour points. Pyramidal level set segmentation with an ancillary Sobel edge operator then extracts the tumor region from all 2D MRI slices without overlapping tissue intensity distributions, thereby effectively minimising segmentation error. Furthermore, when classifying the segmented tumor region by type, the irregular planimetric volume and low interrater concordance of multivariate brain tumors reduce the detection rate because contextual and symmetric features are not extracted. Hence, a 3D Brain Unified NN is proposed, in which an adaptive multi-layer deep unified encoder module extracts 3D contextual and symmetric features by measuring the difference between the observed region and the contralateral region, and the multivariate brain tumors are classified with a boosted Sparse Categorical Cross-entropy loss calculation to achieve a high detection rate. The results obtained on the BraTS2020 and Brain Tumor Detection 2020 datasets show that the proposed model outperforms existing techniques, with precision of 97% and 97.5%, recall of 99% and 97.8%, and accuracy of 95.7% and 98.4%, respectively.
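As a rough illustration of the ancillary Sobel edge step mentioned above, the sketch below computes per-slice Sobel edge-magnitude maps for a 3D MRI volume, which could serve as an auxiliary edge cue for contour initialisation. This is a minimal sketch, not the paper's implementation: it assumes the volume is a NumPy array and that SciPy is available, and the function and variable names (`slice_sobel_edges`, `vol`) are hypothetical.

```python
# Minimal sketch (not the paper's implementation): per-slice Sobel edge
# magnitude maps for a 3D MRI volume, assuming NumPy + SciPy are available.
import numpy as np
from scipy import ndimage


def slice_sobel_edges(volume: np.ndarray) -> np.ndarray:
    """Compute a Sobel edge-magnitude map for every axial 2D slice.

    volume: 3D array of shape (slices, height, width).
    Returns an array of the same shape holding normalised edge magnitudes.
    """
    edges = np.zeros_like(volume, dtype=np.float64)
    for i, sl in enumerate(volume.astype(np.float64)):
        gx = ndimage.sobel(sl, axis=0)   # gradient along rows
        gy = ndimage.sobel(sl, axis=1)   # gradient along columns
        mag = np.hypot(gx, gy)           # gradient magnitude
        if mag.max() > 0:
            mag /= mag.max()             # normalise to [0, 1]
        edges[i] = mag
    return edges


if __name__ == "__main__":
    # Synthetic stand-in for an MRI volume (hypothetical data).
    vol = np.random.rand(4, 128, 128)
    edge_maps = slice_sobel_edges(vol)
    print(edge_maps.shape)  # (4, 128, 128)
```

In the proposed pipeline such edge maps are only an ancillary cue; the Duo-step contour initialisation and pyramidal level-set evolution described in the abstract are not reproduced here.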