Prediction of major arrhythmic events (MAEs) in dilated cardiomyopathy represents an unmet clinical goal. Computational models and artificial intelligence (AI) are new technological tools that could offer a significant improvement in our ability to predict MAEs. In this proof-of-concept study, we propose a deep learning (DL)-based model, which we termed Deep ARrhythmic Prevention in dilated cardiomyopathy (DARP-D), built using multidimensional cardiac magnetic resonance data (cine videos and hypervideos, and late gadolinium enhancement (LGE) images and hyperimages) and clinical covariates, aimed at predicting and tracking an individual patient's risk curve of MAEs (including sudden cardiac death, cardiac arrest due to ventricular fibrillation, sustained ventricular tachycardia lasting ≥30 s or causing haemodynamic collapse in <30 s, and appropriate implantable cardiac defibrillator intervention) over time. The model was trained and validated on 70% of a sample of 154 patients with dilated cardiomyopathy and tested on the remaining 30%. On the test set, DARP-D achieved a Harrell's C concordance index with a 95% confidence interval of 0.12-0.68. We demonstrate that our DL approach is feasible and novel in the field of arrhythmic risk prediction in dilated cardiomyopathy, as it can analyze cardiac motion, tissue characteristics, and baseline covariates to predict an individual patient's risk curve of MAEs. However, the small numbers of patients, MAEs, and training epochs make the model a promising prototype that is not yet ready for clinical use. Further research is needed to improve, stabilize, and validate the performance of DARP-D and to turn it from an AI experiment into a tool for daily clinical use.
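To make the reported evaluation concrete, the sketch below shows how a Harrell's C concordance index and a 95% confidence interval could be computed on a held-out 30% test split. This is a minimal illustration on synthetic data, not the authors' DARP-D pipeline: the percentile bootstrap for the confidence interval, the event rate, the follow-up distribution, and all variable names (`harrells_c`, `bootstrap_ci`, `risk`, etc.) are assumptions made for this example only.

```python
import numpy as np


def harrells_c(event_times, event_observed, risk_scores):
    """Harrell's C-index: proportion of comparable patient pairs in which the
    patient with the higher predicted risk has the earlier observed event."""
    concordant, permissible = 0.0, 0.0
    n = len(event_times)
    for i in range(n):
        if not event_observed[i]:
            continue                      # a pair must be anchored on an observed event
        for j in range(n):
            if i != j and event_times[j] > event_times[i]:
                permissible += 1          # j is still event-free when i has the event
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5     # ties in predicted risk count as half-concordant
    return concordant / permissible if permissible else float("nan")


def bootstrap_ci(event_times, event_observed, risk_scores, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for Harrell's C (an assumed procedure,
    chosen here because the test set is small)."""
    rng = np.random.default_rng(seed)
    n = len(event_times)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)       # resample patients with replacement
        c = harrells_c(event_times[idx], event_observed[idx], risk_scores[idx])
        if not np.isnan(c):
            stats.append(c)
    return np.percentile(stats, [2.5, 97.5])


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    n = 154                               # cohort size quoted in the abstract
    follow_up = rng.exponential(5.0, n)   # hypothetical follow-up times (years)
    events = rng.random(n) < 0.2          # hypothetical MAE indicator (~20% event rate)
    risk = rng.random(n)                  # stand-in for a model's predicted risk score
    # 70/30 split mirroring the train/validation vs. test partition in the abstract
    test = rng.permutation(n)[: int(0.3 * n)]
    c = harrells_c(follow_up[test], events[test], risk[test])
    lo, hi = bootstrap_ci(follow_up[test], events[test], risk[test])
    print(f"Harrell's C = {c:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With only ~46 test patients and few MAEs, the bootstrap interval for C is typically very wide, which is consistent with the broad 95% CI (0.12-0.68) reported for DARP-D and with the conclusion that the model remains a prototype.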