Survival analysis plays a central role in diverse research fields, especially in the health sciences. As an analytical tool, it can help improve patients’ survival time or, at least, reduce the prospects of recurrence in cancer studies. However, assessments of the predictive performance of current survival models have centered mainly on clinical data analyzed with classical survival methods. For censored “omics” data, the performance of survival models has not been thoroughly studied, often because of high dimensionality or because survival time is binarized for classification analysis. We present a neural benchmark approach that analyzes and compares a broad range of classical and state-of-the-art machine learning survival models on “omics” and clinical datasets. All methods considered in our study are evaluated using predictability as the performance measure. The study is systematically designed to make 36 comparisons (9 methods over 4 datasets, i.e., 2 clinical and 2 omics) and shows that, in practice, the predictability of survival models varies with the dataset, the choice of model, and the evaluation metric. Our results emphasize that performance criteria can play a key role in a balanced assessment of diverse survival models. Moreover, Multitask Logistic Regression (MTLR) showed remarkable predictability on almost all the datasets. We believe this outstanding performance presents a unique opportunity for wider use of MTLR in modeling survival risk. For translational clinicians and scientists, we hope our findings provide practical guidance for benchmark studies of survival models and highlight potential areas of research interest.