Abstract

Three-dimensional images are widely used in medical imaging research for classification, segmentation, and detection. However, the limited availability of 3D images hinders research progress because networks are difficult to train on small datasets. Generative methods have been proposed to synthesize medical images using AI techniques, but 2D approaches struggle with 3D anatomical structures, which can result in discontinuities between slices. To mitigate these discontinuities, several 3D generative networks have been proposed; however, the scarcity of 3D images means that training these networks on limited samples is inadequate for producing high-fidelity 3D images. We propose a data-guided generative adversarial network that provides high fidelity in 3D image generation. The generator creates fake images from noise combined with a reference code obtained by extracting features from real images. The generator also creates decoded images from the reference code without noise; these decoded images are compared with the real images to evaluate the fidelity of the reference code. This generation process can create high-fidelity 3D images from only a small amount of real training data. Additionally, our method employs three types of discriminator: volume (evaluates all slices), slab (evaluates a set of consecutive slices), and slice (evaluates randomly selected slices). These discriminators enhance fidelity by differentiating between real and fake images based on detailed characteristics. Our results are compared with those of existing methods using quantitative metrics such as the Fréchet inception distance and maximum mean discrepancy, and they demonstrate that our method produces more realistic 3D images than existing methods.
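The abstract's three-discriminator scheme can be illustrated by how each discriminator samples its input from a 3D volume. The sketch below is a minimal assumption-laden illustration, not the paper's implementation: the function names, the slab size of 8, and the 4 random slices are all hypothetical choices, and the toy volume stands in for a real medical image.

```python
import numpy as np

rng = np.random.default_rng(0)

def volume_input(vol):
    # Volume discriminator: evaluates the full stack of slices.
    return vol

def slab_input(vol, slab_size=8):
    # Slab discriminator: a set of consecutive slices,
    # here taken from a random starting depth (assumed detail).
    start = rng.integers(0, vol.shape[0] - slab_size + 1)
    return vol[start:start + slab_size]

def slice_input(vol, n_slices=4):
    # Slice discriminator: randomly selected, non-consecutive slices.
    idx = rng.choice(vol.shape[0], size=n_slices, replace=False)
    return vol[idx]

# Toy 3D image: depth x height x width (not real medical data).
vol = rng.standard_normal((64, 128, 128))
print(volume_input(vol).shape)  # (64, 128, 128)
print(slab_input(vol).shape)    # (8, 128, 128)
print(slice_input(vol).shape)   # (4, 128, 128)
```

Sampling at three granularities lets the discriminators penalize both global shape errors (volume) and the slice-to-slice discontinuities the abstract highlights (slab and slice).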
