Abstract

The paper discusses a class of stochastic models for evaluating the optimal calibration interval of measuring instruments. The models rest on the assumption that the calibration status of an instrument can be monitored through a single observable parameter, which is subject to a stochastic drift process. The paper introduces and compares stochastic drift models of different natures, and estimates the first passage time of the monitored parameter across a preset limit. The calibration interval is then determined as a suitable percentile of the distribution function of the first passage time. Finally, a preliminary validation of the model, based on a sample of experimental data collected on a class of instruments, is reported.
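The approach described in the abstract can be illustrated with a minimal Monte Carlo sketch. The abstract does not specify the drift model, so the code below assumes a simple Wiener process with constant drift; all parameter values (drift rate, diffusion coefficient, tolerance limit, percentile) are hypothetical placeholders, not taken from the paper. The calibration interval is read off as a low percentile of the simulated first-passage-time distribution, so that most instruments are recalibrated before drifting out of tolerance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical parameters (illustrative only, not from the paper)
mu = 0.02       # mean drift of the monitored parameter per day
sigma = 0.05    # diffusion coefficient per sqrt(day)
limit = 1.0     # preset limit on the monitored parameter
dt = 1.0        # simulation time step (days)
horizon = 2000  # maximum simulated time (days)
n_paths = 10_000

# Simulate Wiener-with-drift paths of the monitored parameter
steps = rng.normal(mu * dt, sigma * np.sqrt(dt), size=(n_paths, horizon))
paths = np.cumsum(steps, axis=1)

# First passage time: first step at which each path reaches the limit
crossed = paths >= limit
fpt = np.where(crossed.any(axis=1), crossed.argmax(axis=1) + 1, horizon)

# Calibration interval as a low percentile of the FPT distribution:
# only ~5 % of instruments drift past the limit before recalibration
interval = np.percentile(fpt, 5)
print(f"Suggested calibration interval: {interval:.0f} days")
```

With these placeholder values the mean first passage time is roughly limit/mu = 50 days, and the 5th percentile yields a shorter, more conservative interval. A closed-form alternative under the same assumption is the inverse Gaussian distribution of the first passage time, whose percentiles could replace the simulation.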
