A disruption prediction algorithm, called disruption prediction using random forests (DPRF), has run in real time in the DIII-D plasma control system (PCS) for more than 900 discharges. DPRF naturally provides a probability associated with its predictions, i.e. the disruptivity signal, now incorporated in the DIII-D PCS. This paper assesses disruption prediction performance shot by shot, by simulating alarms on each discharge as in the PCS framework. Depending on the performance metric chosen to optimise and evaluate DPRF, we find that almost all disruptive discharges are detected, on average with a few hundred milliseconds of warning time, but at the cost of a high false alarm rate. Performance does not satisfy the ITER requirement of a success rate above 95%, but this is not completely unexpected: DPRF is trained on many years of major disruptions occurring during the flattop phase of the plasma current in DIII-D, without any differentiation by cause. Furthermore, we find that DPRF produces a relatively high fraction of false alarms during the first 500 milliseconds after flattop onset. This subtle effect, more evident in discharges where DPRF runs in real time, can be mitigated by restricting the validity range of the predictions, and performance improves accordingly. Even with these present limitations, DPRF offers a novel advantage: through feature contribution analysis, i.e. the identification of which signals contributed to triggering an alarm, its predictions can be interpreted and explained. This is the first time such interpretability features have been exploited by a disruption predictor: by uncovering the causes of disruption events, a better understanding of disruption dynamics is achieved, and a clear path toward the design of disruption avoidance strategies can be provided.
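
The shot-by-shot alarm simulation described above can be illustrated with a minimal Python sketch. This is not the actual DIII-D PCS code: the 0.9 disruptivity threshold, the 25 ms control cycle, and the toy disruptivity trace are all illustrative assumptions. An alarm fires the first time the disruptivity signal crosses the threshold, and the warning time is measured back from the disruption time.

```python
# Minimal sketch of simulating an alarm on a single discharge, assuming a
# simple threshold on the disruptivity signal (NOT the DIII-D PCS code).
import numpy as np

def simulate_alarm(time_ms, disruptivity, t_disrupt_ms, threshold=0.9):
    """Trigger an alarm the first time `disruptivity` crosses `threshold`;
    return (alarm_triggered, warning_time_ms before the disruption)."""
    above = np.flatnonzero(disruptivity >= threshold)
    if above.size == 0:
        return False, None                       # missed detection
    t_alarm = time_ms[above[0]]                  # first threshold crossing
    return True, t_disrupt_ms - t_alarm          # warning time

# Toy example: disruptivity ramping up before a disruption at 3000 ms,
# sampled on an assumed 25 ms control cycle.
time_ms = np.arange(0, 3000, 25)
disruptivity = np.clip((time_ms - 2000) / 800, 0.0, 1.0)
triggered, warning = simulate_alarm(time_ms, disruptivity, t_disrupt_ms=3000)
print(triggered, warning)                        # True 275 (a few hundred ms)
```

Sweeping `threshold` over such simulated alarms is one way to trade missed detections against false alarms, which is the trade-off the performance metrics above quantify.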
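The feature contribution analysis can likewise be sketched with a Saabas-style decomposition of a random-forest prediction, in which each split along a tree's decision path credits its feature with the resulting change in predicted probability, so that the prediction equals a bias term plus per-feature contributions. This is a hedged illustration built on scikit-learn tree internals, not the paper's implementation; the forest, the synthetic data, and all names here are assumptions.

```python
# Sketch of per-feature contributions to a forest's predicted disruptivity
# (Saabas-style decomposition; an assumed analogue of DPRF's analysis).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

def feature_contributions(forest, x):
    """Return (bias, contribs) with predict_proba ~= bias + contribs.sum()."""
    x = np.asarray(x).reshape(1, -1)
    contribs = np.zeros(x.shape[1])
    bias = 0.0
    for tree in forest.estimators_:
        t = tree.tree_
        node = 0
        value = t.value[node][0]
        prob = value[1] / value.sum()            # P(disruptive) at the root
        bias += prob
        while t.children_left[node] != -1:       # walk to the leaf x falls in
            feat = t.feature[node]
            node = (t.children_left[node]
                    if x[0, feat] <= t.threshold[node]
                    else t.children_right[node])
            value = t.value[node][0]
            new_prob = value[1] / value.sum()
            contribs[feat] += new_prob - prob    # credit the split feature
            prob = new_prob
    n = len(forest.estimators_)
    return bias / n, contribs / n                # averaged over the forest

# Synthetic stand-in for plasma signals: the decomposition recovers the
# forest's predicted probability and attributes it to individual features.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
bias, contribs = feature_contributions(rf, X[0])
print(rf.predict_proba(X[0].reshape(1, -1))[0, 1], bias + contribs.sum())
```

Ranking `contribs` at alarm time is what makes the prediction explainable: it names the signals that pushed the disruptivity over the threshold.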