Magnetic Resonance Imaging (MRI) is a crucial tool for clinical diagnosis and quantitative image analysis, providing detailed anatomical images that assist in the detection of a wide range of abnormalities. However, its widespread use is hindered by long acquisition times and the difficulty of recovering high-quality images from undersampled measurements, limitations that a variety of traditional methods have been developed to address. More recently, Deep Learning (DL) techniques have been applied to the inverse problem of reconstructing MR images from undersampled k-space data. These DL models have demonstrated substantial improvements in reconstruction quality, cost-effectiveness, and acquisition time, and offer significant potential for further gains. This study introduces a novel DLGAN (Deep Learning Generative Adversarial Network) model comprising two sub-GAN modules, each tailored to a specific dataset. Each module incorporates a Generator Block (GB) and a Discriminator Block (DB), strategically designed to regenerate MR images from k-space data. These blocks effectively leverage information from both the ground truth and the k-space characteristics, improving reconstruction performance while reducing complexity and increasing overall efficiency. Through the extraction of hierarchical features, the DLGAN model also addresses common issues such as artefact removal and the vanishing-gradient problem. To evaluate the model's effectiveness, comprehensive experiments were conducted using metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The experimental results show that the proposed DLGAN model outperforms recent state-of-the-art designs in MRI image reconstruction, demonstrating its potential to advance clinical diagnosis and quantitative image analysis.
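To make the inverse problem concrete: an MR scanner samples the image's 2-D Fourier transform (k-space), and undersampling amounts to masking out k-space entries. The sketch below, which is an illustrative classical baseline and not the paper's DLGAN pipeline, simulates undersampling and performs the standard zero-filled reconstruction (missing samples left at zero, then an inverse FFT); the function names are hypothetical.

```python
import numpy as np

def undersample(image, mask):
    """Simulate an undersampled MRI acquisition: 2-D FFT of the image,
    centred via fftshift, then element-wise masking of k-space."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return kspace * mask

def zero_filled_recon(kspace, mask):
    """Classical zero-filled baseline: unsampled k-space entries remain
    zero; reconstruct with the inverse 2-D FFT and take the magnitude."""
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
```

With a fully sampled mask this recovers the original image exactly (up to floating-point error); with an undersampled mask it produces the aliased images that DL reconstruction models are trained to correct.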
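The two evaluation metrics can be computed as follows. This is a minimal sketch: PSNR follows the standard definition, while SSIM is shown here with a single global window, whereas library implementations (e.g. scikit-image) use a sliding Gaussian window.

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=1.0):
    """Single-window SSIM with the usual stabilising constants
    c1 = (0.01*MAX)^2 and c2 = (0.03*MAX)^2."""
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Higher is better for both: PSNR is unbounded above, while SSIM equals 1 only for a perfect reconstruction.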