Consequential life cycle assessment (CLCA) studies how a system responds to a decision in question. The body of CLCA studies has grown over the last decade, with models from other fields increasingly incorporated, partly to compensate for the limitations of the conventional linear models used in LCA. As much as we welcome the use of new models in (C)LCA, here we offer a cautionary note on this trend by highlighting the restrictiveness of the assumptions underpinning different models, and we point to a path forward for future CLCA studies. We review the model setup of, and the major assumptions behind, two classes of models used in CLCA studies: linear models, such as process- or input-output-based LCA, which have conventionally been used in LCA; and nonlinear optimization models, such as computable general equilibrium (CGE) models, which are increasingly being applied in CLCA studies. While the linear models rest on restrictive assumptions such as fixed coefficients and an unlimited supply of inputs, so do the nonlinear optimization models. Among others, CGE models assume rationality, the limitations of which have been increasingly revealed by findings in experimental and behavioral economics. We also discuss some foundational questions. Are LCA estimates verifiable or falsifiable? If not, is LCA a science? And is the traditional definition of science based on falsifiability suited to LCA and other disciplines that study complex systems? Considering that (1) LCA studies the complex human-environment system and model estimates or predictions are largely unverifiable, and (2) different classes of models have different strengths and limitations, we make the following recommendations. For decision makers, particularly policy makers, we recommend evaluating estimates from different classes of models, rather than relying on a single class, for more robust decision support. Each model estimate or prediction can be taken as a point of evidence. If most estimates point in the same direction, the results can be considered strong evidence of what would happen. If, on the other hand, the estimates are scattered with no obvious pattern, the results should be considered inconclusive, indicating that more research is needed. For modelers, we recommend that effort be put into improving models' predictive capability by, for example, relaxing unrealistic assumptions such as fixed input/output coefficients, 1:1 perfect displacement, and systemic optimization. Our main message is that mathematical sophistication does not necessarily translate into model accuracy. Given the complexity of the human-environment system, the uncertainty of predicting the future, and the limitations of different models, a multi-model approach is warranted for more robust decision-making, and continuous effort is needed to improve the models' predictive performance.
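To make the fixed-coefficient assumption concrete, below is a minimal sketch of the Leontief-type formulation that underlies conventional process- and input-output-based LCA; the notation (A, B, f, x, g) is our illustrative choice, not taken from the abstract.

```latex
% A minimal sketch of the fixed-coefficient linear model conventionally
% used in LCA (Leontief-type formulation); notation is illustrative.
% A : square matrix of fixed technical (input/output) coefficients
% B : matrix of fixed environmental-exchange coefficients per unit output
% f : final demand vector representing the decision under study
\[
  x = (I - A)^{-1} f
  \qquad\text{and}\qquad
  g = B x = B (I - A)^{-1} f ,
\]
% x : total output required to satisfy the final demand f
% g : resulting life cycle inventory of environmental interventions
```

The restrictiveness noted above is visible directly in this form: because every coefficient in A and B is fixed, doubling f exactly doubles g, leaving no room for substitution, price responses, or supply constraints.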
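The recommendation to treat each model estimate as a point of evidence can likewise be sketched in code. The following is a hypothetical illustration, not a procedure specified in the paper; the function name, the 0.8 agreement threshold, and the example numbers are all our own assumptions.

```python
from statistics import median

def summarize_evidence(estimates, agreement_threshold=0.8):
    """Summarize directional agreement among model estimates.

    estimates: signed changes in the impact of interest (e.g., delta
        GHG emissions due to the decision), one per model class.
    agreement_threshold: hypothetical fraction of estimates that must
        share a sign for the evidence to be called "strong".
    """
    if not estimates:
        return {"verdict": "no evidence", "median": None}
    # Reduce each estimate to its direction: +1, -1, or 0.
    signs = [1 if e > 0 else -1 if e < 0 else 0 for e in estimates]
    dominant = max(signs, key=signs.count)
    share = signs.count(dominant) / len(estimates)
    verdict = ("strong" if share >= agreement_threshold and dominant != 0
               else "inconclusive")
    return {"verdict": verdict, "direction": dominant,
            "agreement": share, "median": median(estimates)}

# Hypothetical estimates from four model classes:
print(summarize_evidence([0.8, 1.2, 0.5, 1.0]))    # strong, positive
print(summarize_evidence([0.8, -1.2, 0.1, -0.4]))  # inconclusive
```

In practice, the threshold and the set of models would be decision-specific; the point is simply that directional agreement across model classes, rather than any single estimate, carries the evidential weight.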