Accurate longitudinal risk prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) is essential for clinicians to administer appropriate and timely diagnostics, thereby improving treatment planning and outcome. This study aimed to develop an improved longitudinal DCI prediction model and to evaluate its performance in predicting DCI between day 4 and day 14 after aneurysm rupture. Two DCI classification models were trained: (1) a static model based on routinely collected demographics and SAH grading scores and (2) a dynamic model based on results from laboratory and blood gas analysis anchored at the time of DCI. A combined model was derived from these two using a voting approach. Multiple classifiers, including Logistic Regression, Support Vector Machines, Random Forests, Histogram-based Gradient Boosting, and Extremely Randomized Trees, were evaluated through cross-validation using anchored data. A leave-one-out simulation was then performed on the best-performing models to evaluate their longitudinal performance using time-dependent Receiver Operating Characteristic (ROC) analysis. The training dataset included 218 patients, 89 of whom developed DCI (41%). In the anchored ROC analysis, the combined model achieved a ROC AUC of 0.73 ± 0.05 in predicting DCI onset, while the static and the dynamic model achieved ROC AUCs of 0.69 ± 0.08 and 0.66 ± 0.08, respectively. In the leave-one-out simulation experiments, the dynamic and the voting model produced highly dynamic risk scores for DCI occurrence over the course of disease (intra-patient score range: 0.25 [0.24, 0.49] and 0.17 [0.12, 0.25] for the dynamic and the voting model, respectively). In the time-dependent ROC analysis, the dynamic model performed best until day 5.4, after which the voting model showed the best performance. In summary, a machine learning model for longitudinal DCI risk assessment was developed, comprising a static and a dynamic sub-model.
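The combined model described above could, under some simplifying assumptions, be sketched with scikit-learn's soft-voting ensemble, which averages the predicted DCI probabilities of the two sub-models. This is only an illustrative sketch: the synthetic features stand in for the static (demographics, SAH grading) and dynamic (laboratory, blood gas) predictors, and unlike the study's design, both sub-models here see the same feature matrix.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the cohort: 218 patients, ~41% DCI prevalence,
# as reported in the abstract. Real feature definitions are in the paper.
X, y = make_classification(
    n_samples=218, n_features=20, n_informative=6,
    weights=[0.59, 0.41], random_state=0,
)

# Two sub-models: a simple linear one (stand-in for the static model)
# and a tree ensemble (stand-in for the dynamic model).
static_model = LogisticRegression(max_iter=1000)
dynamic_model = RandomForestClassifier(n_estimators=200, random_state=0)

# Soft voting averages the class probabilities of the two sub-models,
# mirroring the "combined model derived using a voting approach".
voting = VotingClassifier(
    estimators=[("static", static_model), ("dynamic", dynamic_model)],
    voting="soft",
)

# Cross-validated anchored evaluation, analogous to the study's setup.
auc = cross_val_score(voting, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated ROC AUC: {auc.mean():.2f} ± {auc.std():.2f}")
```

In a setting closer to the paper, each sub-model would be wrapped in its own preprocessing pipeline selecting only its feature subset before entering the voting ensemble.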
The longitudinal performance evaluation revealed substantial time dependence in model performance, underscoring the need for longitudinal assessment of prediction models in intensive care settings. Moreover, clinicians need to be aware of these performance variations when performing a risk assessment and weigh the different model outputs accordingly.
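A minimal way to expose such time dependence is to evaluate discrimination separately at each day of the observation window, as in the time-dependent ROC analysis above. The sketch below assumes a hypothetical leave-one-out output with one risk score per patient per day (days 4–14) and, as a simplification, a single DCI label per patient rather than time-anchored labels.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical cohort matching the abstract's numbers: 218 patients,
# ~41% DCI prevalence, daily risk scores for days 4-14 after rupture.
days = np.arange(4, 15)
n_patients = 218
labels = rng.random(n_patients) < 0.41

# Synthetic scores: DCI patients get a shifted score distribution so the
# model has some (artificial) discriminative signal.
scores = rng.random((n_patients, len(days))) + 0.3 * labels[:, None]

# Day-by-day ROC AUC: a drop or crossover between models at some day
# (e.g., day 5.4 in the abstract) becomes visible in this curve.
daily_auc = {
    int(day): roc_auc_score(labels, scores[:, i])
    for i, day in enumerate(days)
}
for day, auc in daily_auc.items():
    print(f"day {day}: AUC = {auc:.2f}")
```

With several models evaluated this way, plotting their daily AUC curves side by side would show which model to weight more heavily in which phase of the disease course.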