Abstract

In the rapidly evolving domain of medical imaging, there is increasing interest in harnessing deep learning models for enhanced diagnosis and prognosis. Among these, the Variational Autoencoder (VAE) and the Diffusion model stand out for their potential in generating synthetic lung cancer images. This research article presents a comparative analysis of both models, focusing on their application in lung cancer imaging. Drawing on the "Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) lung cancer dataset," the study investigates the efficiency, accuracy, and fidelity of the images generated by each model. The findings suggest that while the VAE model offers faster image generation, its output is notably blurrier than that of its counterpart. Conversely, the Diffusion model, despite its slower generation speed, is capable of producing highly detailed synthetic images even when trained for a limited number of epochs. This comparison highlights the strengths and shortcomings of each model and lays the groundwork for further refinement and potential clinical application. The broader objective is to catalyze advancements in lung cancer diagnosis, ultimately leading to better patient outcomes.
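To make the reported speed difference concrete: a VAE generates an image in a single decoder pass, whereas a diffusion model must run its denoising network once per sampling step. The following is a minimal PyTorch sketch of the two sampling procedures only, assuming small untrained stand-in networks (`vae_decoder` and `eps_model` are hypothetical placeholders, not the architectures or hyperparameters used in the study) and DDPM-style ancestral sampling for the diffusion side.

import torch
import torch.nn as nn

IMG = 32          # assumed toy image size, not the dataset's resolution
LATENT = 16       # assumed toy latent dimension

# Stand-in for a trained VAE decoder: one pass from latent to image.
vae_decoder = nn.Sequential(nn.Linear(LATENT, IMG * IMG), nn.Sigmoid())

def vae_sample(n: int) -> torch.Tensor:
    """VAE sampling: draw latents from the prior, decode once.
    Fast (a single network pass), but outputs tend to be blurry."""
    z = torch.randn(n, LATENT)                    # latent prior N(0, I)
    return vae_decoder(z).view(n, 1, IMG, IMG)

# Stand-in for a trained diffusion noise-prediction network.
eps_model = nn.Sequential(nn.Flatten(), nn.Linear(IMG * IMG, IMG * IMG))

def diffusion_sample(n: int, steps: int = 50) -> torch.Tensor:
    """DDPM-style sampling: start from pure noise and denoise iteratively.
    Each step is a full network pass, so generation costs roughly
    `steps` times a VAE decode, which is why diffusion is slower."""
    x = torch.randn(n, 1, IMG, IMG)
    betas = torch.linspace(1e-4, 0.02, steps)     # linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    for t in reversed(range(steps)):
        eps = eps_model(x).view_as(x)             # predicted noise at step t
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / torch.sqrt(alphas[t])
        if t > 0:                                 # add noise except at the last step
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x

print(vae_sample(2).shape)        # one network pass total
print(diffusion_sample(2).shape)  # `steps` network passes total

The contrast in the loop structure, rather than any particular architecture, is the source of the trade-off the abstract describes: the diffusion model spends its extra compute refining detail step by step, while the VAE's single decode averages over plausible outputs and blurs fine structure.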
