Abstract
Medical image synthesis plays an important role in clinical diagnosis by providing auxiliary pathological information. However, previous methods usually adopt a one-step strategy designed for natural (in-the-wild) image synthesis, which is not sensitive to the local details of tissues within medical images. In addition, these methods consume substantial computing resources when generating medical images, which seriously limits their applicability in clinical diagnosis. To address these issues, a Light and Effective Generative Adversarial Network (LEGAN) is proposed to generate high-fidelity medical images in a lightweight manner. In particular, a coarse-to-fine paradigm is designed within a two-stage generative adversarial network to imitate the human painting process, which guarantees sensitivity to local information in medical images. Furthermore, a low-rank convolutional layer, which uses the principal components of full-rank convolutional kernels to reduce model redundancy, is introduced to construct LEGAN for lightweight medical image synthesis. Additionally, a multi-stage mutual information distillation is devised to maximize the dependency between the distributions of generated and real medical images during training. Finally, extensive experiments are conducted on two typical tasks, i.e., retinal fundus image synthesis and proton-density-weighted MR image synthesis. The results demonstrate that LEGAN outperforms the comparison methods by a significant margin in terms of Fréchet inception distance (FID) and number of parameters (NoP).
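The abstract does not detail how the low-rank convolutional layer is constructed, so the sketch below is only a minimal, assumed PyTorch realisation of the general idea: a k×k convolution is factorised into a rank-r "basis" convolution followed by a 1×1 projection, and can optionally be initialised from the leading singular components (principal components) of an existing full-rank kernel. The class name `LowRankConv2d`, the `from_full_rank` helper, and the choice of rank are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn as nn


class LowRankConv2d(nn.Module):
    """Rank-r factorisation of a k x k convolution (illustrative sketch).

    A full kernel W of shape (C_out, C_in, k, k) is approximated by a k x k
    convolution into r channels followed by a 1 x 1 projection to C_out
    channels, reducing parameters from C_out*C_in*k*k to roughly
    r*(C_in*k*k + C_out).
    """

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 rank: int = 8, stride: int = 1, padding: int = 1):
        super().__init__()
        self.basis = nn.Conv2d(in_ch, rank, kernel_size,
                               stride=stride, padding=padding, bias=False)
        self.project = nn.Conv2d(rank, out_ch, kernel_size=1, bias=True)

    @classmethod
    def from_full_rank(cls, conv: nn.Conv2d, rank: int) -> "LowRankConv2d":
        """Initialise from the leading principal components (truncated SVD)
        of a trained full-rank kernel -- one plausible reading of
        'principal components of full-rank convolutional kernels'."""
        out_ch, in_ch, k, _ = conv.weight.shape
        layer = cls(in_ch, out_ch, k, rank,
                    stride=conv.stride[0], padding=conv.padding[0])
        # Flatten the kernel to (C_out, C_in*k*k) and take a truncated SVD.
        w = conv.weight.detach().reshape(out_ch, -1)
        u, s, vh = torch.linalg.svd(w, full_matrices=False)
        # Basis conv carries S * V^T, the 1x1 projection carries U.
        layer.basis.weight.data = (s[:rank, None] * vh[:rank]).reshape(rank, in_ch, k, k)
        layer.project.weight.data = u[:, :rank].reshape(out_ch, rank, 1, 1)
        if conv.bias is not None:
            layer.project.bias.data = conv.bias.detach().clone()
        else:
            layer.project.bias.data.zero_()
        return layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.project(self.basis(x))
```

Under this factorisation, the composed operator equals a convolution with kernel U S V^T, i.e. the best rank-r approximation of the original kernel, which is one way such a layer could trade a small loss in expressiveness for the large reduction in parameters (NoP) that the abstract reports.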