Abstract

This study addresses a limitation of existing cross-modal medical image synthesis algorithms: they fail to capture the spatial and structural information of human tissue effectively, so the synthesized images suffer from defects such as blurred edges and a low signal-to-noise ratio. The authors propose a cross-modal synthesis method that combines residual modules with generative adversarial networks. The generator network incorporates an improved residual Inception module and an attention mechanism, which reduce the number of parameters and strengthen the generator's feature-learning capability, while the discriminator adopts a multiscale architecture to improve discriminative performance. A multilevel structural similarity (SSIM) loss is added to the loss function to better preserve image contrast. The algorithm is compared against mainstream methods on the ADNI dataset. The experimental results show that the MAE of the synthesized PET images decreases while the SSIM and PSNR indexes improve, indicating that the proposed model preserves structural and contrast information and improves image quality in both visual and objective terms, so the synthesized images are visually closer to the real ones.
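The multilevel structural similarity loss mentioned above can be illustrated with a small sketch. The paper does not give its exact formulation, so the following is an assumption: SSIM is computed globally at several resolutions obtained by average-pooling downsampling, the scores are averaged, and the result is blended with an L1 reconstruction term (the weight `lam` and the 3-level pyramid are hypothetical choices, not values from the paper).

```python
import numpy as np

def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM between two images with intensities in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def downsample(img):
    """2x2 average pooling (one pyramid level)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def multilevel_ssim_loss(fake, real, levels=3):
    """1 - mean SSIM over several resolutions; lower is better."""
    scores = []
    for _ in range(levels):
        scores.append(ssim(fake, real))
        fake, real = downsample(fake), downsample(real)
    return 1.0 - float(np.mean(scores))

def reconstruction_loss(fake, real, lam=0.8):
    """Hypothetical blend of L1 and multilevel SSIM terms."""
    l1 = float(np.abs(fake - real).mean())
    return (1.0 - lam) * l1 + lam * multilevel_ssim_loss(fake, real)
```

In a full training setup this term would be added to the adversarial loss of the generator; identical images yield a loss of zero, and structural distortions raise the SSIM term even when per-pixel error is small.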
