Abstract

In recent years, the application of deep learning to medical image analysis has shown promising results in improving the diagnosis and treatment of disease. One such technique is CycleGAN, a variant of Generative Adversarial Networks (GANs) that enables unpaired image-to-image translation. This paper presents a CycleGAN-based approach for translating between CT and MRI scans, which can provide doctors with additional diagnostic information and assist in the prediction and diagnosis of tumors. Our experiments use brain scan images collected from a Kaggle dataset, with no pairing information available. The generator and discriminator models of the CycleGAN are trained with the Adam optimizer and a cycle consistency loss weight (λ) of 10. Total training time is about 12 days, with the model trained for 75 epochs at a fixed learning rate of 0.0002. The results demonstrate the effectiveness of the proposed method, achieving high-quality image translation from MRI to CT scans. The advantages of CycleGAN in medical image analysis include its ability to handle unpaired data, perform cross-domain image translation, enforce cycle consistency, and generate diverse outputs. Future work can explore the use of CycleGAN for other medical image analysis tasks and investigate how to further optimize model performance.
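The cycle consistency term mentioned above (with λ = 10) is the key ingredient that lets CycleGAN train on unpaired CT and MRI images: translating a scan to the other modality and back should reproduce the original. The sketch below illustrates that loss structure only; the two "generators" are hypothetical invertible stand-ins (simple linear maps introduced here for illustration), whereas the paper's actual generators are deep convolutional networks trained jointly with discriminators.

```python
import numpy as np

LAMBDA_CYC = 10.0  # cycle consistency weight, as stated in the abstract


def G(mri):
    """Hypothetical stand-in for the MRI -> CT generator (a linear map)."""
    return 0.9 * mri + 0.1


def F(ct):
    """Hypothetical stand-in for the CT -> MRI generator (inverse of G)."""
    return (ct - 0.1) / 0.9


def cycle_consistency_loss(real_mri, real_ct):
    """L_cyc = lambda * ( ||F(G(mri)) - mri||_1 + ||G(F(ct)) - ct||_1 ).

    Each image is translated to the other modality and back; the L1
    distance to the original penalises information lost in translation.
    """
    rec_mri = F(G(real_mri))  # MRI -> fake CT -> reconstructed MRI
    rec_ct = G(F(real_ct))    # CT -> fake MRI -> reconstructed CT
    return LAMBDA_CYC * (np.abs(rec_mri - real_mri).mean()
                         + np.abs(rec_ct - real_ct).mean())


# Toy "scans": because these stand-in generators are exact inverses,
# the cycle loss is (numerically) zero; real generators only approximate this.
mri = np.array([[0.0, 1.0], [2.0, 3.0]])
ct = np.array([[0.5, 1.5], [2.5, 3.5]])
loss = cycle_consistency_loss(mri, ct)
```

In the full model this term is added to the adversarial losses of both discriminators, and all networks are updated with Adam at the fixed learning rate of 0.0002 reported above.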
