Abstract

Computed tomography (CT) and magnetic resonance imaging (MRI) are widely used modalities for primary clinical imaging, providing crucial anatomical and pathological information for diagnosis. CT measures X-ray attenuation, while MRI captures the density of hydrogen nuclei in tissues. Despite their distinct imaging physics, the signals obtained from both modalities when imaging the same subject can be represented by modality-specific parameters and common latent variables related to anatomy and pathology. This paper proposes an adversarial learning approach using deep convolutional neural networks to disentangle these factors, which allows one modality to be simulated from the other. Experimental results demonstrate our ability to generate synthetic CT images from MRI inputs on the Gold-atlas dataset, which consists of paired CT-MRI volumes. Patch-based learning and a visual Turing test are employed to model the discriminator losses. Our approach achieves a mean absolute error (MAE) of 36.81 ± 4.46 HU, a peak signal-to-noise ratio (PSNR) of 26.12 ± 0.31 dB, and a structural similarity index measure (SSIM) of 0.90 ± 0.02. Notably, the synthetic CT images accurately depict bones, gaseous cavities, and soft-tissue textures, which can be challenging to visualize in MRI. The proposed model runs at an inference compute cost of 430.68 GFLOPs/voxel. By reducing the need for pre-operative CT scans, this method can minimize radiation exposure and provide an MR-only alternative in clinical settings.
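As a concrete illustration of the evaluation protocol summarized above, the sketch below computes the three reported metrics (MAE in HU, PSNR in dB, and SSIM) for a paired synthetic/reference CT volume. This is a minimal sketch under stated assumptions, not the paper's released code: the array names, the use of scikit-image's structural_similarity, and the assumed 12-bit HU data range are illustrative choices, not details taken from the paper.

```python
# Minimal sketch of paired synthetic-CT evaluation (MAE, PSNR, SSIM).
# Assumptions (not from the paper): volumes are co-registered numpy
# arrays in HU, and the intensity range used for PSNR/SSIM is 4095
# (typical 12-bit CT); substitute the dataset's actual range.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_synthetic_ct(synthetic_ct: np.ndarray,
                          reference_ct: np.ndarray,
                          data_range: float = 4095.0):
    """Return (MAE in HU, PSNR in dB, SSIM) for a paired CT volume."""
    diff = synthetic_ct.astype(np.float64) - reference_ct.astype(np.float64)
    mae = np.mean(np.abs(diff))                      # mean absolute error, HU
    mse = np.mean(diff ** 2)
    psnr = 10.0 * np.log10((data_range ** 2) / mse)  # peak signal-to-noise ratio
    ssim = structural_similarity(reference_ct.astype(np.float64),
                                 synthetic_ct.astype(np.float64),
                                 data_range=data_range)
    return mae, psnr, ssim

# Example usage with a hypothetical pair of 3D volumes:
# mae, psnr, ssim = evaluate_synthetic_ct(sct_volume, ct_volume)
```

In practice such metrics are often averaged over held-out patients, with the standard deviation across patients giving the ± terms reported in the abstract.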