Abstract

Accurate medical image segmentation is crucial for diagnosis and treatment planning in modern healthcare. Deep learning methods such as CNNs, U-Nets, and Transformers have transformed the field by automating segmentation procedures that were previously performed manually, a labor-intensive process. However, challenges such as overly complex architectures and indistinct image features persist and limit accuracy, and researchers continue working to overcome these obstacles so that medical image segmentation can realize its full potential in healthcare. This paper presents an enhanced U-Net model designed specifically for brain tumour MRI segmentation with improved precision. Our approach has three primary components. First, in the image preprocessing phase, we prioritize feature enhancement using methods such as CLAHE (Contrast Limited Adaptive Histogram Equalization). Second, we modify the U-Net architecture with a customized layered design to improve segmentation outcomes. Finally, we apply a CNN-based post-processing stage that refines the segmentation results with additional convolutional layers. A total of 3,064 brain MRI images were used to train (1,840 images), validate (612 images), and test (612 images) the model. We obtained a recall of 93.66%, accuracy of 97.79%, F-score of 93.15%, and precision of 92.66%. The training and validation curves of the Dice coefficient showed little divergence, with training reaching roughly 93% and validation 84%, suggesting good generalization. Visual inspection of the segmentation results confirmed high accuracy, although occasional minor errors such as false positives were observed.
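To illustrate the idea behind the CLAHE preprocessing step mentioned above, the sketch below implements plain global histogram equalization on an 8-bit grayscale image in numpy. This is a simplification for illustration only: CLAHE additionally operates on small tiles independently and clips each tile's histogram to limit noise amplification, and the paper does not specify its exact parameters.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    CLAHE extends this idea by equalizing small tiles independently
    and clipping each tile's histogram before building the mapping;
    this sketch shows only the core intensity-remapping step.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # CDF value of the darkest pixel present
    # Map each intensity so the output histogram is roughly uniform.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Low-contrast toy "scan": intensities packed into [100, 120]
rng = np.random.default_rng(0)
img = rng.integers(100, 121, size=(64, 64), dtype=np.uint8)
out = equalize_histogram(img)
print(img.min(), img.max(), "->", out.min(), out.max())  # range stretches to 0..255
```

In practice, true CLAHE is typically applied through a library routine such as OpenCV's `cv2.createCLAHE(clipLimit=..., tileGridSize=...)`, rather than reimplemented by hand.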
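The Dice coefficient reported above is a standard overlap metric for segmentation masks. The paper does not give its implementation, so the following is an illustrative numpy version for binary masks, using the usual definition 2|A∩B| / (|A| + |B|) with a small epsilon to avoid division by zero on empty masks.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: predicted mask covers 4 pixels, ground truth covers 3,
# and they agree on 3 pixels, giving Dice = 2*3 / (4+3) = 6/7.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # → 0.857
```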
