Abstract

A key challenge in maximizing the effectiveness of model-based design of experiments for calibrating nonlinear process models is the inaccurate prediction of the information afforded by each new experiment. We present a novel methodology that exploits prior probability distributions of the model parameter estimates in a bi-objective optimization formulation, where a conditional value-at-risk (CVaR) criterion is considered alongside an average information criterion. We implement a tractable numerical approach that discretizes the experimental design space and leverages the concept of continuous-effort experimental designs in a convex optimization formulation. We demonstrate the effectiveness and tractability of the methodology on three case studies, including the design of dynamic experiments. In one case, the Pareto frontier comprises experimental campaigns that significantly increase the information content of the worst-case scenarios. In another case, the same campaign is proven to be optimal irrespective of the risk attitude. An open-source implementation of the methodology is made available in the Python software Pydex.
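To illustrate the kind of formulation the abstract describes, the sketch below sets up a minimal continuous-effort design problem that minimizes a weighted combination of the average and CVaR D-optimality losses over prior parameter scenarios, using the standard Rockafellar-Uryasev reformulation of the CVaR. This is a hedged illustration under stated assumptions, not Pydex's actual API: the scenario count, the random atomic Fisher information matrices, and all names (random_fim, beta, w, and so on) are hypothetical, and cvxpy is used as a stand-in convex solver interface.

    import cvxpy as cp
    import numpy as np

    # Hypothetical setup: S prior parameter scenarios, N candidate experiments,
    # P model parameters. In practice the atomic Fisher information matrices
    # would come from model sensitivities evaluated at each prior sample.
    S, N, P = 20, 12, 3
    rng = np.random.default_rng(0)

    def random_fim():
        # Placeholder for one candidate experiment's atomic information matrix.
        A = rng.standard_normal((P, P))
        return A @ A.T / P

    m = [[random_fim() for _ in range(N)] for _ in range(S)]

    p = cp.Variable(N, nonneg=True)  # continuous experimental efforts
    t = cp.Variable()                # CVaR auxiliary variable (value-at-risk level)
    beta = 0.8                       # tail probability defining the CVaR
    w = 0.5                          # trade-off weight between the two criteria

    # Per-scenario D-optimality loss: negative log-determinant of the
    # effort-weighted information matrix (convex in the efforts p).
    losses = [-cp.log_det(sum(p[i] * m[s][i] for i in range(N)))
              for s in range(S)]

    # Average criterion and Rockafellar-Uryasev reformulation of the CVaR.
    avg_loss = sum(losses) / S
    cvar_loss = t + sum(cp.pos(l - t) for l in losses) / ((1 - beta) * S)

    # Scalarized bi-objective problem; sweeping w over [0, 1] traces out
    # a Pareto frontier of experimental campaigns.
    problem = cp.Problem(cp.Minimize(w * avg_loss + (1 - w) * cvar_loss),
                         [cp.sum(p) == 1])
    problem.solve()
    print("optimal efforts:", np.round(p.value, 3))

Because the effort-weighted information matrix is affine in p and the log-determinant is concave, the scalarized problem remains convex for any weight w, which is what makes the discretized continuous-effort approach tractable.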
