Objective
Develop a stakeholder-informed ethical framework providing practical guidance to health systems considering implementation of suicide risk prediction models.

Methods
In this multi-method study, patients and family members participating in formative focus groups (n = 4 focus groups, 23 participants), patient advisors, and a bioethics consultant collectively informed the development of a web-based survey; survey results (n = 1,357 respondents) and themes from interviews with stakeholders (patients, health system administrators, clinicians, suicide risk model developers, and a bioethicist) were used to draft the ethical framework.

Results
Clinical, ethical, operational, and technical issues raised by multiple stakeholder groups, along with corresponding questions for adopters to consider before and during suicide risk model implementation, are organized within six ethical principles in the resulting stakeholder-informed framework. Key themes include: patients' rights to informed consent and the choice to conceal or reveal risk (autonomy); appropriate application of risk models, data and model limitations, and the consequences of ambiguous risk predictors in opaque models (explainability); selecting actionable risk thresholds (beneficence, distributive justice); access to risk information and stigma (privacy); unanticipated harms (non-maleficence); and planning for the expertise and resources needed to continuously audit models, monitor harms, and redress grievances (stewardship).

Conclusions
Enthusiasm for risk prediction in the context of suicide is understandable given the escalating suicide rate in the U.S. Attending to ethical and practical concerns before implementing automated suicide risk prediction models may help avoid unnecessary harms that could thwart the promise of this innovation in suicide prevention.

HIGHLIGHTS
- Patients desire the ability to consent to, or opt out of, suicide risk prediction models.
- Recursive ethical questioning should occur throughout risk model implementation.
- Risk modeling resources are needed to continuously audit models and monitor harms.