Abstract

Background and Objective: Mainstream image synthesis methods fail to capture local contextual information and long-range dependence and lack channel adaptability, especially under computational overload and randomly distributed noise, leading to the loss of contrast and fine detail.

Approach: To alleviate these issues, we propose a Generative Adversarial Network aggregating a large kernel decomposable attention (LKDA) bottleneck block, LKDA-GAN, for cross-modality image synthesis. First, a novel LKDA module is proposed by combining a spatial local convolution, a spatial long-range convolution, and a channel convolution; it balances local contextual information against long-range dependence and enhances channel adaptability by enlarging the receptive field. Next, a bottleneck block is designed and integrated into the LKDA module through feature dimensional transformation to capture rich semantic information and reduce computational overhead. Finally, an auxiliary registration network (ARN) based on a noise transition matrix is put forward to learn prior knowledge of the noise distribution and produce a unique optimal solution; it is built on Res-UNet to avoid network degradation.

Main results: Extensive experiments, assessed both qualitatively and quantitatively, demonstrate that the proposed approach outperforms state-of-the-art methods, and ablation studies confirm the contribution of LKDA.

Significance: LKDA-GAN provides a suitable way to synthesize images across modalities, which is conducive to indicating disease areas and improving diagnostic accuracy.
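The abstract describes the LKDA decomposition (spatial local convolution, spatial long-range convolution, channel convolution) and its bottleneck wrapper only at a high level. As a reading aid, here is a minimal sketch of such a module, assuming PyTorch; the kernel sizes, dilation, reduction ratio, gating scheme, and class names (LKDA, LKDABottleneck) are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch of an LKDA-style attention block (assumed hyperparameters).
import torch
import torch.nn as nn


class LKDA(nn.Module):
    """Large kernel decomposable attention: a depth-wise local conv,
    a depth-wise dilated (long-range) conv, and a 1x1 channel conv,
    whose output gates the input feature map."""

    def __init__(self, channels: int):
        super().__init__()
        # Spatial local convolution: small depth-wise kernel.
        self.local_conv = nn.Conv2d(channels, channels, kernel_size=5,
                                    padding=2, groups=channels)
        # Spatial long-range convolution: depth-wise dilated kernel that
        # enlarges the receptive field at low cost (effective size 19x19).
        self.long_range_conv = nn.Conv2d(channels, channels, kernel_size=7,
                                         padding=9, dilation=3,
                                         groups=channels)
        # Channel convolution: 1x1 conv mixing channels (channel adaptability).
        self.channel_conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.channel_conv(self.long_range_conv(self.local_conv(x)))
        return attn * x  # attention map gates the input features


class LKDABottleneck(nn.Module):
    """Bottleneck wrapper: reduce channels, apply LKDA, restore channels.
    The dimensional transformation cuts the computational overhead; the
    residual connection helps avoid network degradation."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.reduce = nn.Conv2d(channels, hidden, kernel_size=1)
        self.attn = LKDA(hidden)
        self.expand = nn.Conv2d(hidden, channels, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.expand(self.attn(self.act(self.reduce(x))))
        return out + x


# Usage: a 64-channel feature map passes through the block unchanged in shape.
x = torch.randn(1, 64, 128, 128)
y = LKDABottleneck(64)(x)  # -> torch.Size([1, 64, 128, 128])
```

Depth-wise convolutions keep the parameter count of the large-kernel path near-linear in the channel count, which is the usual motivation for this kind of decomposition.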

