Abstract

We present a Bayesian framework for model-based optimal sensor placement. Our interest lies in minimizing the uncertainty on predictions of a particular response quantity of interest, with parameter estimation being an intermediate step for this purpose. By developing a methodology that targets prediction inference rather than parameter inference, we prioritize reduction of uncertainty on the parameters that matter most for the prediction of the actual quantity of interest. Currently available optimal sensor placement methods focus on parameter inference rather than prediction inference and might therefore yield suboptimal solutions for prediction inference. We adopt a unifying framework in which parameter inference is merely a special case of prediction inference. Following the Bayesian framework for uncertainty quantification, the model parameters are treated as random variables and their uncertainty before data collection is described by a prior probability density function. The prior uncertainty is updated to the posterior uncertainty using measured data that depend on the chosen sensor locations. This posterior parameter uncertainty is then converted to the posterior prediction uncertainty. As a scalar measure of uncertainty, we use the determinant of the posterior prediction covariance matrix. This is a general metric that can be used for both prediction and parameter inference. Using the expectation of this determinant with respect to the distribution of possible data as the objective function, the sensor locations are optimized to minimize the expected parameter or prediction uncertainty. The required covariance matrices of parameters and predictions are evaluated using a Monte Carlo sampling approach. We verify this procedure for a simple test example and present a (simplified) case study from structural dynamics where sensor locations in a modal test are optimized for parameter and prediction inference. We show how the optimal locations for minimizing prediction uncertainty differ from those obtained by minimizing parameter uncertainty. In general, the difference will depend on the prior parameter uncertainties, the way the experimental data depend on the parameters, and the way the predictions depend on the parameters. Significant differences will occur when both the data and the predictions are local in nature, and optimizing for prediction inference allows adapting the data so that they are most informative for the relevant parameter subset.
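
The design criterion described above can be illustrated with a short sketch. The code below is not the paper's implementation: it assumes an illustrative exponential-decay forward model, a Gaussian prior, additive Gaussian noise, and prior importance sampling as the Monte Carlo scheme; all function names, sensor grids, and numerical values are hypothetical. It estimates the expectation, over simulated datasets, of the determinant of the posterior prediction covariance for a few candidate two-sensor designs and selects the design with the smallest value. Parameter inference is recovered as the special case in which the prediction function simply returns the parameters themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative problem setup (all choices here are assumptions for the sketch) ---
prior_mean = np.array([1.0, 0.5])
prior_cov = np.diag([0.2**2, 0.1**2])
noise_std = 0.05

def response(x, theta):
    """Hypothetical forward model: response at sensor locations x for parameters theta."""
    return theta[0] * np.exp(-theta[1] * x)

def prediction(theta):
    """Hypothetical prediction quantity of interest: response at two unobserved points."""
    return response(np.array([3.0, 4.0]), theta)

def expected_det_prediction_cov(x, n_outer=200, n_inner=2000):
    """Monte Carlo estimate of E_y[ det Cov(q | y, x) ] via prior importance sampling."""
    # Inner prior samples, reused for every simulated dataset
    thetas = rng.multivariate_normal(prior_mean, prior_cov, size=n_inner)
    preds = np.array([prediction(t) for t in thetas])    # (n_inner, n_q)
    model = np.array([response(x, t) for t in thetas])   # (n_inner, n_sensors)

    dets = []
    for _ in range(n_outer):
        # Simulate a plausible dataset from a prior draw of the parameters
        theta_true = rng.multivariate_normal(prior_mean, prior_cov)
        y = response(x, theta_true) + noise_std * rng.standard_normal(x.shape)

        # Importance weights proportional to the Gaussian likelihood p(y | theta, x)
        logw = -0.5 * np.sum((y - model) ** 2, axis=1) / noise_std**2
        w = np.exp(logw - logw.max())
        w /= w.sum()

        # Weighted posterior prediction covariance and its determinant
        mean_q = w @ preds
        centred = preds - mean_q
        cov_q = centred.T @ (centred * w[:, None])
        dets.append(np.linalg.det(cov_q))
    return np.mean(dets)

# Compare a few candidate layouts of two sensors on [0, 2] and pick the best one
candidates = [np.array([0.2, 0.4]), np.array([0.5, 1.5]), np.array([1.8, 2.0])]
scores = [expected_det_prediction_cov(x) for x in candidates]
best = candidates[int(np.argmin(scores))]
print("expected det of prediction covariance per design:", scores)
print("best design:", best)
```

Prior importance sampling is only one of several possible ways to approximate the posterior covariance; it degrades when the posterior is much more concentrated than the prior, and a full implementation would typically use a more robust sampler or a larger sample size.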
