Accurate diagnosis and treatment of brain tumors rely heavily on reliable tumor segmentation, where multimodal Magnetic Resonance Imaging (MRI) plays a pivotal role by providing complementary information that significantly enhances segmentation performance. However, the uneven grayscale distribution, irregular shapes, and large size variations of brain tumors make this task highly challenging. To overcome these obstacles, we introduce BTSegDiff, a novel framework for automated brain tumor segmentation that is based on a Diffusion Probabilistic Model (DPM) and leverages the diverse information in multimodal MRI scans. First, we design a dynamic conditional guidance module built around an encoder that extracts information from the multimodal MRI images and guides the DPM toward generating accurate, realistic segmentation masks. During guidance, the features generated by the diffusion process must be fused with the extracted multimodal features; however, the diffusion process itself introduces a significant amount of Gaussian noise, which can degrade the fusion. We therefore design a Fourier-domain feature fusion module that transfers the fusion process into the Fourier domain and reduces the impact of high-frequency noise on the fused features. Finally, because the DPM is a generative model, each sampling run produces a different result, which is highly undesirable in the exacting field of medicine. We therefore design a Stepwise Uncertainty Sampling module, based on Monte Carlo uncertainty estimation, that yields a unique output while simultaneously improving segmentation accuracy. To validate the effectiveness of our approach, we evaluate it on the popular BraTS2020 and BraTS2021 benchmarks.
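The idea behind frequency-domain fusion can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the averaging scheme and the `cutoff` hyperparameter are assumptions, chosen only to show how combining two feature maps in the Fourier domain allows high-frequency (noise-dominated) components to be attenuated before transforming back.

```python
import numpy as np

def fourier_fuse(diffusion_feat, condition_feat, cutoff=0.25):
    """Fuse two 2D feature maps in the Fourier domain.

    Both maps are transformed with a 2D FFT, averaged in frequency
    space, and low-pass filtered so that high-frequency components,
    where diffusion-induced Gaussian noise concentrates, are removed.
    `cutoff` (in normalized frequency units) is a hypothetical
    hyperparameter for this sketch.
    """
    F_a = np.fft.fft2(diffusion_feat)
    F_b = np.fft.fft2(condition_feat)
    fused = 0.5 * (F_a + F_b)  # simple fusion: average the spectra

    # Build a radial low-pass mask over normalized frequencies.
    h, w = diffusion_feat.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mask = np.sqrt(fx**2 + fy**2) <= cutoff

    # Suppress high frequencies, then return to the spatial domain.
    return np.real(np.fft.ifft2(fused * mask))
```

In a real model the fusion weights would be learned rather than fixed, but the low-pass step already shows why this route is less sensitive to the Gaussian noise injected by the diffusion process.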
The experimental results show that our method outperforms many existing brain tumor segmentation methods. Our code is available at https://github.com/jaceqin/BTSegDiff.
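The aggregation principle behind the uncertainty-driven sampling can be sketched as below. This is an illustrative Monte Carlo aggregation under assumed conventions, not the paper's Stepwise Uncertainty Sampling module: `sample_fn` is a hypothetical stand-in for one stochastic reverse-diffusion run, and the thresholding rule is an assumption.

```python
import numpy as np

def mc_aggregate(sample_fn, n_samples=10, threshold=0.5):
    """Turn a stochastic segmentation sampler into a unique output.

    Draws several soft masks from `sample_fn` (one reverse-diffusion
    run each), averages them into a single deterministic mask, and
    reports the pixel-wise variance as a Monte Carlo uncertainty map.
    """
    samples = np.stack([sample_fn() for _ in range(n_samples)])
    mean_prob = samples.mean(axis=0)       # averaged soft mask
    uncertainty = samples.var(axis=0)      # pixel-wise MC variance
    mask = (mean_prob >= threshold).astype(np.uint8)  # unique hard mask
    return mask, uncertainty
```

Averaging removes the run-to-run variability of the generative sampler, while the variance map flags the pixels where the samples disagree most.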