Abstract

In an attempt to reduce waiting time and waiting list mortality, decision making in liver transplantation in recent years has been largely dominated by two strategies: prioritization of the sickest patients for transplantation using the model for end-stage liver disease (MELD) score [1] and increased use of high-risk donor livers. The combination of the two has, however, frequently created the difficult situation of a high-risk donor liver going to a high-risk patient, leading to an expected poor outcome and the possibility of a futile transplant. Furthermore, waiting list mortality must be balanced against survival after transplantation to optimize utility and justice in organ allocation and resource utilization. Hence, different models have been developed to predict the outcome of transplantation. Models based either on donor factors alone (for example, the donor risk index, DRI) [2] or on recipient factors alone (for example, the MELD score) [1] are not predictive of survival after liver transplantation. Others that incorporate both donor and recipient factors have achieved better accuracy. The survival outcomes following liver transplantation (SOFT) score [3] is a complex list of 18 factors, whereas the D-MELD [4] simply combines two dominant factors, one each for donor and recipient. More recently, the balance of risk (BAR) score [5], which includes six variables (donor age, recipient age, recipient MELD score, retransplantation, pretransplant life support, and cold ischemic time), was developed using the United Network for Organ Sharing (UNOS) database of the United States of America and validated in a European center.

In this issue of the journal, Campos Junior and colleagues attempt to validate the BAR score as a predictor of survival for a cohort of 402 patients after liver transplantation at a single center in Brazil [6]. Although the BAR score with a cutoff of 11 has prognostic significance for 3- and 12-month survival, its predictive accuracy is low, as indicated by an area under the receiver operating characteristic curve (AUROC) of only 0.65 for this Brazilian population. In fact, multiple regression analysis of prognostic factors for 3- and 12-month survival does not identify any of the six variables incorporated in the BAR score as having prognostic significance.

Several differences between this study population and the UNOS database may account for the poor prognostic value of the BAR score in this cohort. First, the recipients from Brazil were younger (median age 48.8 vs 54 years) and had a higher MELD score (median 20 vs 18). Second, the donors were also younger (median age 35.6 vs 43 years) and had a higher DRI (median 1.68 vs 1.38). Third, cold ischemic time was much longer (median 10 vs 7 h). Hence, there seem to be marked differences in at least four of the six BAR score variables between the Brazilian cohort and the UNOS database [5]. Fourth, and most importantly, survival for this Brazilian cohort was much worse, with 3-month survival of 77 % for patients with a BAR score below 11 and 46 % for those with a score of 11 or above. Although the authors did not report 3-month survival for the entire cohort, the figure should be at or below 70 %, much lower than that from the UNOS database (93.2 %). Although the different donor and recipient characteristics, especially the higher MELD score and DRI, might have contributed to the inferior outcome, the possibility of a surgical or center factor cannot be excluded.
In fact, the authors identify an operative factor, transfusion of …
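The discriminative performance discussed above can be made concrete with a short illustration. The sketch below, in Python with scikit-learn, uses entirely invented data (the variables bar_score and died_within_3_months are hypothetical; neither the BAR formula nor any patient data from the study is reproduced) to show how an AUROC and the sensitivity and specificity of a single cutoff such as 11 are typically computed for a prognostic score.

```python
# Illustrative only: how the discrimination of a prognostic score (such as
# the BAR score) and a single cutoff are typically evaluated.
# All data below are invented for demonstration; they are NOT from the study.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical cohort: one risk score per patient and a binary 3-month outcome.
n = 402
bar_score = rng.integers(0, 28, size=n)                # invented scores
p_death = 1 / (1 + np.exp(-(bar_score - 14) / 6))      # invented risk curve
died_within_3_months = rng.random(n) < p_death         # invented outcomes

# AUROC: the probability that a randomly chosen non-survivor has a higher
# score than a randomly chosen survivor (0.5 = chance, 1.0 = perfect).
auroc = roc_auc_score(died_within_3_months, bar_score)
print(f"AUROC = {auroc:.2f}")

# Performance of a single cutoff (score >= 11), as used in the study.
high_risk = bar_score >= 11
sensitivity = (high_risk & died_within_3_months).sum() / died_within_3_months.sum()
specificity = (~high_risk & ~died_within_3_months).sum() / (~died_within_3_months).sum()
print(f"cutoff >= 11: sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

An AUROC of 0.65, as reported for this cohort, sits much closer to 0.5 (chance) than to 1.0 (perfect discrimination), which is why the score separates survivors from non-survivors only weakly even though the cutoff of 11 reaches statistical significance.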
