Objective
This work aims to synthesize a realistic missing MRI modality from multiple already-acquired modalities, thereby providing richer diagnostic information and benefiting downstream tasks such as segmentation and diagnosis.

Methods
With an adversarial network modeling the nonlinear mapping between the inputs and the output, the proposed LR-cGAN extracts the inherent latent representations of the different MRI modalities with N collaboratively trained encoders and fuses them with a latent space processing network (LSPN) composed of several residual blocks. In addition to the L1 loss, an image gradient difference loss (GDL) is included in the objective function to alleviate insufficient image sharpness. To validate the effectiveness of LR-cGAN, experiments were evaluated by peak signal-to-noise ratio (PSNR), structural similarity index (SSIM) and normalized root-mean-square error (NRMSE) on the BRATS 2015 dataset.

Results
Compared with single-modality input, two-modality input improves the synthesis results by 1.196 dB in PSNR and 0.019 in SSIM, and reduces NRMSE by 0.04. As more input modalities are added, synthesis performance continues to improve. Removing any key component, that is, the LSPN, the GDL loss or the adversarial loss, degrades the results, confirming each component's contribution to the model. Furthermore, LR-cGAN outperforms REPLICA, M-GAN, MILR and sGAN in all metrics on different synthesis tasks, demonstrating its superiority.

Conclusion
The proposed LR-cGAN can flexibly accept multiple input modalities and generate images that closely resemble real modality images, and thus has the potential to supplement diagnostic information in clinical practice.
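To make the generator design described in Methods concrete, the following is a minimal PyTorch sketch of N per-modality encoders whose latent representations are fused by an LSPN built from residual blocks. All layer widths, depths, and the downsampling scheme are illustrative assumptions, since the abstract does not specify them.

```python
# Minimal sketch of the LR-cGAN generator: N encoders -> LSPN -> decoder.
# Channel counts, block counts, and strides are assumptions for illustration.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

class Encoder(nn.Module):
    """One encoder per input modality; all N are trained jointly."""
    def __init__(self, latent_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_channels, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.net(x)

class LRcGANGenerator(nn.Module):
    def __init__(self, n_modalities, latent_channels=64, n_res_blocks=4):
        super().__init__()
        self.encoders = nn.ModuleList(Encoder(latent_channels) for _ in range(n_modalities))
        # LSPN: fuses the concatenated latents with residual blocks.
        self.lspn = nn.Sequential(
            nn.Conv2d(latent_channels * n_modalities, latent_channels, 1),
            *[ResidualBlock(latent_channels) for _ in range(n_res_blocks)],
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, modalities):  # list of (B, 1, H, W) tensors
        latents = [enc(m) for enc, m in zip(self.encoders, modalities)]
        return self.decoder(self.lspn(torch.cat(latents, dim=1)))
```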
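The training objective combines adversarial, L1 and gradient difference terms. Below is a minimal sketch of that combination, assuming a standard non-saturating GAN loss; the weights lambda_l1 and lambda_gdl are hypothetical, and the paper's exact formulation may differ.

```python
# Sketch of the generator objective: adversarial + L1 + GDL.
# lambda_l1 and lambda_gdl are assumed values, not reported by the paper.
import torch
import torch.nn.functional as F

def gradient_difference_loss(pred, target):
    """Penalizes mismatched image gradients to counter over-smoothed output."""
    dy_pred = pred[..., 1:, :] - pred[..., :-1, :]
    dx_pred = pred[..., :, 1:] - pred[..., :, :-1]
    dy_tgt = target[..., 1:, :] - target[..., :-1, :]
    dx_tgt = target[..., :, 1:] - target[..., :, :-1]
    return (dy_pred - dy_tgt).abs().mean() + (dx_pred - dx_tgt).abs().mean()

def generator_loss(pred, target, disc_logits, lambda_l1=100.0, lambda_gdl=100.0):
    # Adversarial term: the generator tries to make the discriminator say "real".
    adv = F.binary_cross_entropy_with_logits(disc_logits, torch.ones_like(disc_logits))
    return (adv
            + lambda_l1 * F.l1_loss(pred, target)
            + lambda_gdl * gradient_difference_loss(pred, target))
```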
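The three reported metrics can be computed with scikit-image as sketched below; the data_range argument assumes the images are normalized to [0, 1], which is an assumption rather than a detail given in the abstract.

```python
# Sketch of the evaluation metrics (PSNR, SSIM, NRMSE) on a 2-D slice pair.
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio, structural_similarity,
                             normalized_root_mse)

def evaluate(synth: np.ndarray, real: np.ndarray, data_range: float = 1.0):
    psnr = peak_signal_noise_ratio(real, synth, data_range=data_range)
    ssim = structural_similarity(real, synth, data_range=data_range)
    nrmse = normalized_root_mse(real, synth)
    return psnr, ssim, nrmse
```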