Abstract

An engineering company manufacturing high-precision sensors had accumulated a large historical database of test results for a particular type of sensor. The company's aim was not to use these historical data to improve the estimation of future individual sensor parameters, but rather to reduce the number of measurements needed per sensor while guaranteeing a required level of accuracy. In this paper, we show how this can be done using Bayesian ideas and introduce novel theory for linear regression models that determines how the reduction in individual sensor measurements can be achieved. Specifically, for estimating the parameters of closely related sensors, an estimate can be thought of as comprising a global component, namely the mean over all sensors, and a local component, which is a shift from that mean. In a Bayesian framework, the historical data provide the global component, so all that is needed from an individual sensor is the local component. Non-Bayesian estimation methods require both components and hence many measurements, whereas Bayesian methods need only the local fit and hence fewer measurements per sensor. We provide the supporting theory and demonstrate the approach on a real-life application with real data. Copyright © 2014 John Wiley & Sons, Ltd.
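As a minimal sketch of the idea (not the authors' exact model), the code below assumes a conjugate Gaussian setup for Bayesian linear regression in which the prior mean of the coefficients is taken to be the "global" estimate from the historical sensor population, so only a few per-sensor measurements are needed to recover the "local" shift. All names, values, and the specific prior structure are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: Bayesian linear regression where historical data supply
# the prior ("global") mean of the coefficients, so a new sensor needs only
# enough measurements to estimate its "local" shift from that mean.
# The conjugate-Gaussian setup and all numbers below are assumptions.
import numpy as np

def posterior_coefficients(X, y, prior_mean, prior_var, noise_var):
    """Posterior mean and covariance for Gaussian-prior, Gaussian-noise regression.

    X          : (n, p) design matrix of the few per-sensor measurements
    y          : (n,)   measured responses for this sensor
    prior_mean : (p,)   global component, e.g. mean coefficients of historical sensors
    prior_var  : scalar prior variance (spread of sensors around the global mean)
    noise_var  : scalar measurement-noise variance
    """
    p = X.shape[1]
    precision = X.T @ X / noise_var + np.eye(p) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y / noise_var + prior_mean / prior_var)
    return mean, cov

# Toy usage: only 3 measurements, because the prior already carries the
# global information learned from the historical database.
rng = np.random.default_rng(0)
prior_mean = np.array([2.0, -0.5])                  # assumed historical estimate
true_coefs = prior_mean + np.array([0.1, -0.05])    # this sensor's local shift
X = rng.normal(size=(3, 2))
y = X @ true_coefs + rng.normal(scale=0.01, size=3)
est, _ = posterior_coefficients(X, y, prior_mean, prior_var=0.05, noise_var=0.01**2)
print(est)  # close to true_coefs despite the small number of measurements
```

In this toy setting the informative prior plays the role the abstract describes for the historical database: a non-Bayesian (least-squares) fit would need enough measurements to identify both components, whereas the posterior combines the historical global estimate with the few new observations.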
