Details of tiny abnormal pathologies and textures are crucial for clinical experts and computer-aided diagnosis in medical imaging, and super-resolving medical images provides significant support for disease diagnosis. However, differences in tissue appearance and spatial resolution across images, caused by the acquisition principles and parameters of different imaging techniques, limit the applicability of super-resolution methods in multi-modal medical imaging. To this end, we propose a multi-modal adaptive super-resolution algorithm for reconstructing medical images, named MAda-SR, which extends the traditional Adam optimizer into an adaptive optimizer in terms of parameter updates and optimization strategies. Additionally, we enhance the MSE loss by adjusting its weight space, thereby increasing the transfer potential between multi-modal tasks and enabling more sustained continual learning. Through extensive experimental validation, we demonstrate that a single super-resolution model can handle medical datasets of various modalities without compromising performance. The results indicate that our proposed MAda-SR outperforms comparative methods on forgetting-control and relearning metrics, achieving both stability and plasticity. The source code is available at https://github.com/wuzheng2022/MAda-SR.
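The abstract mentions enhancing the MSE loss by adjusting its weight space but does not give the formulation. Below is a minimal, hypothetical sketch of one plausible form, a per-element weighted MSE; the function name `weighted_mse` and the weighting scheme are illustrative assumptions, not the paper's actual loss:

```python
import numpy as np

def weighted_mse(pred, target, weights):
    """Hypothetical weighted MSE: per-element weights rescale the squared
    error, e.g. to emphasize regions or parameters important to previously
    learned modalities. This is an illustrative sketch, not MAda-SR's loss."""
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Normalize by the total weight so uniform weights recover ordinary MSE.
    return float(np.sum(weights * (pred - target) ** 2) / np.sum(weights))

p = np.array([1.0, 2.0, 3.0])
t = np.array([1.0, 1.0, 1.0])
uniform = weighted_mse(p, t, np.ones(3))   # equals the plain MSE, 5/3
skewed = weighted_mse(p, t, np.array([1.0, 1.0, 10.0]))  # up-weights last element
```

With uniform weights the expression reduces to the ordinary MSE, while non-uniform weights bias the optimization toward selected elements, which is the kind of mechanism a weight-space adjustment for cross-modality transfer could use.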