Abstract

Three-dimensional convolutional neural networks (3D CNNs) have been widely applied to brain tumour (BT) images to better understand disease progression. However, training a 3D CNN is computationally expensive and prone to overfitting given the small sample sizes typical of medical imaging. Here, we propose a novel 2D-3D approach that converts a 2D brain image into a fused 3D image using a learnable weighted gradient of the image. Through this 2D-to-3D conversion, the proposed model can easily forward the fused 3D image through a pre-trained 3D model while achieving better performance than different 3D baselines. We used VGG16 for feature extraction in the implementation, as it outperformed other 3D CNN backbones. We further showed that the slice weights are location-dependent and that model performance depends on the 2D-to-3D fusion view, with the coronal view yielding the best outcomes. With the new approach, we increased the accuracy for classifying brain tumour images to 0.88, compared with conventional 3D CNNs. The novel 2D-3D model may have profound implications for timely BT diagnosis in clinical settings.
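The abstract does not spell out the exact fusion rule, so the following is only a minimal sketch of one plausible interpretation: stacking copies of the 2D image, each blended with the image's gradient magnitude under a slice-specific (location-dependent) weight, to form a 3D volume that a pre-trained 3D backbone could consume. The function name `fuse_2d_to_3d` and the linear weight schedule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fuse_2d_to_3d(image, weights):
    """Sketch (assumed interpretation): each output slice is the input
    image plus its spatial gradient magnitude, scaled by a learnable,
    slice-specific weight."""
    gy, gx = np.gradient(image.astype(float))   # spatial derivatives of the 2D image
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)       # gradient magnitude
    # One weighted slice per entry in `weights`; stacking yields the 3D volume.
    slices = [image + w * grad_mag for w in weights]
    return np.stack(slices, axis=0)

# Example: build a 16-slice volume from a 128x128 image.
img = np.random.rand(128, 128)
weights = np.linspace(0.0, 1.0, 16)  # stand-in for weights learned during training
volume = fuse_2d_to_3d(img, weights)
print(volume.shape)  # (16, 128, 128)
```

In the paper the weights are learned end-to-end with the classifier; here a fixed schedule simply stands in so the shape of the transformation is visible.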
