Abstract

Brain tumor segmentation is of great importance for clinical diagnosis and treatment. For this reason, experts try to identify border regions of particular importance using multimodal images acquired from magnetic resonance imaging systems. In some images, these border regions may be intertwined, which can lead experts to incomplete or incorrect decisions. This paper presents DenseUNet+, a new deep learning-based approach for performing segmentation with high accuracy using multimodal images. In the DenseUNet+ model, data from four different modalities are processed together in dense block structures; linear operations are then applied to these features, which are concatenated and passed to the decoder layer. The proposed method was compared with state-of-the-art (SOTA) studies on the BraTS2021 and FeTS2021 datasets using the Dice and Jaccard metrics. The Dice and Jaccard scores were 95% and 88% on BraTS2021, respectively, and 86% and 87% on FeTS2021, respectively. These results are better than those of many SOTA brain tumor segmentation methods.
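
To make the described data flow concrete, the sketch below illustrates the multimodal fusion step (per-modality dense blocks, a linear projection, concatenation, then hand-off to the decoder path) in PyTorch. The dense block depth, growth rate, projection width, and the use of a 1x1 convolution as the "linear operation" are assumptions for illustration only and are not taken from the paper.

```python
# Minimal sketch of the multimodal fusion described in the abstract.
# All layer sizes and the 1x1-conv "linear operation" are assumptions.
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Small dense block: each layer sees all previous feature maps."""

    def __init__(self, in_channels, growth_rate=16, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(
                nn.Sequential(
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                )
            )
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


class MultimodalFusion(nn.Module):
    """Per-modality dense blocks -> linear (1x1 conv) projection -> concat."""

    def __init__(self, per_modality_channels=1, proj_channels=32, num_modalities=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [DenseBlock(per_modality_channels) for _ in range(num_modalities)]
        )
        self.projections = nn.ModuleList(
            [nn.Conv2d(b.out_channels, proj_channels, kernel_size=1)
             for b in self.blocks]
        )

    def forward(self, modalities):
        # `modalities`: list of 4 tensors, one per MRI modality
        # (e.g. T1, T1ce, T2, FLAIR), each shaped (N, 1, H, W).
        fused = [proj(block(m)) for block, proj, m in
                 zip(self.blocks, self.projections, modalities)]
        return torch.cat(fused, dim=1)  # features handed to the decoder path


if __name__ == "__main__":
    x = [torch.randn(2, 1, 128, 128) for _ in range(4)]
    print(MultimodalFusion()(x).shape)  # torch.Size([2, 128, 128, 128])
```

In an encoder-decoder (U-Net style) layout, the concatenated output above would feed the decoder, with skip connections carrying intermediate features across; those details are omitted here.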
