In many countries, reimbursement decisions are informed by estimates of the relative cost-effectiveness of alternative technologies, with costs and health outcomes projected over the lifetime of patients. Most cost-effectiveness analyses (CEAs) use decision models populated with evidence from randomized controlled trials (RCTs) that do not follow all patients until death. In the presence of such censored data, analyses of clinical effectiveness tend to use Cox proportional hazards regression models and avoid specifying a baseline hazard. This approach is insufficient for CEA, which must predict expected survival over time, a process that often requires extrapolation beyond the RCT follow-up data. There are many options for conducting such extrapolations and great uncertainty as to the best choice of approach; consequently, gaming by manufacturers is a potential concern. A standard approach is to apply parametric survival functions to the individual patient data (IPD) from RCTs to predict mean survival by treatment arm over the lifetime.

Davies and others illustrate just how important and uncertain model selection for extrapolation can be, even in the simple setting where IPD are available to estimate relative effectiveness for the 2 comparators of interest. They updated a CEA of 2 alternative prostheses for total hip replacement using 16 years of follow-up data, rather than the 8 years available when the previous study was conducted, and found that the original conclusion was overturned. How critical should we be of the previous CEAs, which may have provided the best recommendation given the evidence available at that time? Who is to say that the current results, which still do not follow all patients for their lifetime, represent the truth? Rational decision making requires extrapolation approaches that are grounded in objective criteria and make efficient use of all evidence available at the time of decision.
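To make the idea of extrapolating mean survival concrete, the sketch below fits the simplest parametric model, an exponential distribution, to a small set of hypothetical censored IPD and contrasts the restricted mean survival within trial follow-up with the extrapolated lifetime mean. The data, the follow-up horizon, and the choice of model are all illustrative assumptions, not figures from the studies discussed.

```python
import math

# Hypothetical IPD from one trial arm: (time_in_years, event_flag).
# event_flag = 1 means death was observed; 0 means the patient was
# censored at that time (still alive at last follow-up).
ipd = [(1.2, 1), (2.5, 0), (3.1, 1), (4.0, 0), (4.8, 1),
       (5.5, 0), (6.2, 1), (7.0, 0), (7.9, 1), (8.0, 0)]

events = sum(e for _, e in ipd)
total_time = sum(t for t, _ in ipd)

# Exponential MLE under right censoring:
# hazard = number of events / total person-time at risk.
rate = events / total_time

# Restricted mean survival time within the 8-year follow-up:
# the integral of S(t) = exp(-rate * t) from 0 to 8.
follow_up = 8.0
rmst = (1 - math.exp(-rate * follow_up)) / rate

# Extrapolated lifetime mean survival under the exponential model: 1 / rate.
mean_lifetime = 1 / rate

print(f"rate={rate:.3f}/y, RMST(8y)={rmst:.2f}y, "
      f"extrapolated lifetime mean={mean_lifetime:.2f}y")
```

The gap between the restricted mean and the lifetime mean is exactly the portion of the estimate that rests on extrapolation rather than on observed data, which is why the choice of parametric family matters so much.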
Recent reviews of technology appraisals for the National Institute for Health and Care Excellence (NICE) have shown that studies used extrapolation approaches that were inadequate or inadequately described. Latimer's paper, which summarizes a longer report, proposes an algorithm to help future analysts improve their approach to extrapolation. The algorithm is limited to CEAs that compare 2 alternatives in which IPD are available from the pivotal RCT and there is no treatment switching. Within this context, there are 3 main suggestions: first, select the range of candidate models from a description of the occurrence of events over time, specifically by plotting the log of the cumulative hazard against the log of time; second, fit the chosen range of models to the entire RCT dataset and assess their relative fit to the observed data; and third, consider the relative plausibility of the predictions in the unobserved period.

The development of the algorithm is to be welcomed, given the current state of both applied and methodological work in this area. We believe that the algorithm can improve the accuracy of extrapolations that are undertaken, not least by encouraging analysts to report a broader range of approaches than is current practice (in effect, to report a structural sensitivity analysis). That said, we think further consideration

From the Department of Health Services Research and Policy, London School of Hygiene and Tropical Medicine, London, UK (RG, MP); and Oxford Outcomes, Oxford, UK (NH).
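The first two steps of the algorithm can be sketched in code: a hypothetical censored dataset is summarized by a Nelson-Aalen cumulative hazard estimate, the slope of log H(t) against log t is used to screen candidate models (an approximately straight line suggests a Weibull; a slope near 1, an exponential), and exponential and Weibull fits are then compared by AIC. The dataset, the grid search over the Weibull shape, and the use of AIC as the fit criterion are illustrative choices for this sketch, not Latimer's specific recommendations.

```python
import math

# Hypothetical IPD: (time, event) pairs; event=1 death observed, 0 censored.
ipd = [(0.5, 1), (1.0, 1), (1.5, 0), (2.0, 1), (2.8, 1),
       (3.5, 0), (4.2, 1), (5.0, 1), (6.1, 0), (7.3, 1),
       (8.0, 0), (8.0, 0)]

# Step 1: Nelson-Aalen cumulative hazard at each distinct event time.
times = sorted(set(t for t, e in ipd if e == 1))
H, cum = [], 0.0
for t in times:
    n_at_risk = sum(1 for ti, _ in ipd if ti >= t)
    deaths = sum(1 for ti, e in ipd if ti == t and e == 1)
    cum += deaths / n_at_risk
    H.append(cum)

# The diagnostic plot is log(H) against log(t); here we estimate its
# slope by ordinary least squares instead of plotting it.
xs = [math.log(t) for t in times]
ys = [math.log(h) for h in H]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

# Step 2: fit candidate models and compare fit by AIC = 2p - 2*loglik.
def weibull_profile_loglik(shape):
    """Censored Weibull log-likelihood with the scale profiled out."""
    d = sum(e for _, e in ipd)
    scale = (sum(t ** shape for t, _ in ipd) / d) ** (1 / shape)
    ll = 0.0
    for t, e in ipd:
        if e:  # density contribution for observed deaths
            ll += math.log(shape / scale) + (shape - 1) * math.log(t / scale)
        ll -= (t / scale) ** shape  # survival contribution for everyone
    return ll

ll_exp = weibull_profile_loglik(1.0)   # exponential = Weibull with shape 1
ll_wei = max(weibull_profile_loglik(k / 100) for k in range(20, 301))
aic_exp = 2 * 1 - 2 * ll_exp
aic_wei = 2 * 2 - 2 * ll_wei
print(f"log-log slope={slope:.2f}, AIC exp={aic_exp:.1f}, Weibull={aic_wei:.1f}")
```

The third step, judging the plausibility of predictions beyond follow-up, has no such mechanical check: it requires external evidence and clinical judgment, which is precisely where the algorithm asks analysts to be explicit.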