Abstract
Manual identification of brain tumors in Magnetic Resonance (MR) images is laborious, time-consuming, and prone to human error. Automatic segmentation of brain tumors from MR images aims to bridge this gap. U-Net, a deep learning model, has delivered promising results for brain tumor segmentation. However, the model tends to over-segment the tumor volume, which significantly limits its suitability for practical deployment. In this work, the baseline U-Net model has been studied with the addition of residual, multi-resolution, dual attention, and deep supervision blocks. The residual blocks extract features efficiently and reduce the semantic gap between the low-level features carried by the skip connections and the high-level features of the decoder. The multi-resolution blocks extract features at multiple scales to analyze tumors of varying sizes. The dual attention mechanism highlights tumor representations and reduces over-segmentation. Finally, the deep supervision blocks utilize features from several decoder layers to obtain the target segmentation. The design of the proposed model is justified with several experiments and ablation studies. The proposed model has been trained and evaluated on the BraTS2020 training and validation datasets. On the validation data, the proposed model achieves Dice scores of 0.60, 0.75, and 0.62 for the enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively, and Hausdorff 95 distances of 46.84, 11.05, and 22.5, respectively. Compared to the baseline U-Net, the proposed model performs better on the WT and TC volumes under the Hausdorff 95 distance metric, but not on the ET volume.
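To make the architectural additions concrete, the snippet below is a minimal illustrative sketch in PyTorch, not the authors' implementation: a 3D residual block and a single attention gate of the kind typically placed on U-Net skip connections. All class names, channel arguments, and the assumption that the gating signal has already been resampled to the skip connection's spatial size are hypothetical simplifications.

# Illustrative sketch only (assumed PyTorch building blocks, not the paper's code).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # Two 3x3x3 convolutions with a 1x1x1 shortcut so that features are
    # extracted efficiently while the input signal is preserved.
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm3d(out_ch),
        )
        self.shortcut = nn.Conv3d(in_ch, out_ch, 1)  # match channel counts

    def forward(self, x):
        return torch.relu(self.conv(x) + self.shortcut(x))

class AttentionGate(nn.Module):
    # Weights skip-connection features by a gating signal from the decoder,
    # suppressing background responses that contribute to over-segmentation.
    # Assumes skip and gate already share the same spatial dimensions.
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_skip = nn.Conv3d(skip_ch, inter_ch, 1)
        self.w_gate = nn.Conv3d(gate_ch, inter_ch, 1)
        self.psi = nn.Sequential(nn.Conv3d(inter_ch, 1, 1), nn.Sigmoid())

    def forward(self, skip, gate):
        # Attention coefficients in [0, 1], broadcast over channels.
        alpha = self.psi(torch.relu(self.w_skip(skip) + self.w_gate(gate)))
        return skip * alpha

In an attention-gated U-Net of this style, each decoder stage would pass its upsampled features as the gating signal and the corresponding encoder features as the skip input before concatenation; deep supervision would additionally attach auxiliary segmentation heads to intermediate decoder outputs.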