Current mainstream multi-modal medical image-to-image translation methods face a dilemma: supervised methods achieve outstanding performance but rely on pixel-wise aligned training data to constrain model optimization, yet obtaining pixel-wise aligned multi-modal medical image datasets is challenging; unsupervised methods can be trained without paired data, but their reliability cannot be guaranteed. At present, there is no ideal multi-modal medical image-to-image translation method that generates reliable results without requiring pixel-wise aligned data. This work develops a novel medical image-to-image translation model that is independent of pixel-wise aligned data (MITIA), enabling reliable multi-modal medical image-to-image translation from misaligned training data. MITIA employs a prior extraction network, composed of a multi-modal medical image registration module and a multi-modal misalignment error detection module, to extract as much pixel-level prior information as possible from training data containing misalignment errors. The extracted prior information is then used to construct a regularization term that constrains the optimization of an unsupervised cycle-consistent generative adversarial network, restricting its solution space and thereby improving the performance and reliability of the generator. We trained MITIA on six datasets containing different misalignment errors and on two well-aligned datasets, performed quantitative analysis using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as metrics, and compared the proposed method with six state-of-the-art image-to-image translation methods. Both the quantitative analysis and qualitative visual inspection indicate that MITIA outperforms the competing state-of-the-art methods on misaligned as well as aligned data. Moreover, MITIA remains stable in the presence of misalignment errors in the training data, regardless of their severity or type. The proposed method thus achieves outstanding performance in multi-modal medical image-to-image translation without aligned training data. Because pixel-wise aligned data are difficult to obtain in medical image translation tasks, MITIA is expected to offer significant practical value compared with existing methods.
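For concreteness, the constrained objective described above can be sketched as follows. This is a minimal form assuming a standard CycleGAN backbone; the weighting factors $\lambda_{\mathrm{cyc}}$ and $\lambda_{\mathrm{prior}}$ and the pixel-level alignment mask $M$ are illustrative notation, not taken from the paper:

$$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{adv}}(G, F, D_X, D_Y) + \lambda_{\mathrm{cyc}}\,\mathcal{L}_{\mathrm{cyc}}(G, F) + \lambda_{\mathrm{prior}}\,\mathbb{E}_{(x,\tilde{y})}\big[\lVert M \odot (G(x) - \tilde{y}) \rVert_1\big],$$

where $G$ and $F$ are the two generators, $\tilde{y}$ is the target image after registration by the registration module, and $M$ masks out pixels flagged by the misalignment error detection module, so the pixel-level supervision acts only where the extracted prior is judged trustworthy.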
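The quantitative evaluation relies on PSNR and SSIM; a minimal sketch of how these metrics can be computed with scikit-image follows. The helper name evaluate_translation and the [0, 1] intensity scaling are assumptions for illustration, not details from the paper:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_translation(real: np.ndarray, fake: np.ndarray) -> tuple[float, float]:
    # Hypothetical helper: compares a ground-truth image against a translated
    # image; both are assumed to be 2-D grayscale arrays scaled to [0, 1].
    psnr = peak_signal_noise_ratio(real, fake, data_range=1.0)
    ssim = structural_similarity(real, fake, data_range=1.0)
    return psnr, ssim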