Abstract
Precise localization and volumetric segmentation of glioblastoma before and after surgery are crucial for various clinical purposes, including post‐surgery treatment planning, monitoring tumour recurrence, and creating radiotherapy maps. Manual delineation is time‐consuming and prone to errors, hence the recent adoption of automated 3D quantification methods that apply deep learning algorithms to MRI scans. However, automated segmentation often over‐segments or under‐segments tumour regions. An interactive deep‐learning tool would empower radiologists to rectify these inaccuracies by adjusting the over‐segmented and under‐segmented voxels as needed. This paper proposes Atten‐SEVNETR, a network that combines vision transformers and convolutional neural networks (CNNs). This hybrid architecture learns the input volume representation as sequences and captures global multi‐scale information. An interactive graphical user interface is also developed in which the initial 3D segmentation of glioblastoma can be interactively corrected to remove falsely detected spurious tumour regions. Atten‐SEVNETR is trained on the BraTS training dataset and tested on the BraTS validation dataset and on the Uppsala University post‐operative glioblastoma dataset. The methodology outperformed state‐of‐the‐art networks such as nnFormer, SwinUNet, and SwinUNETR. On the Uppsala University dataset, the mean Dice score achieved is 0.7302 and the mean 95th percentile Hausdorff distance (HD95) is 7.78 mm.