Abstract

We model a partially observable deteriorating system subject to random failure. The state process follows an unobservable continuous-time homogeneous Markov chain. At equidistant sampling times, vector-valued observations having a multivariate normal distribution with state-dependent mean and covariance matrix are obtained at a positive cost. At each sampling epoch a decision is made either to run the system until the next sampling epoch or to carry out full preventive maintenance, which is assumed to be less costly than corrective maintenance carried out upon system failure. The objective is to determine the optimal control policy that minimizes the long-run expected average cost per unit time. We show that the optimal preventive maintenance region is a convex subset of Euclidean space. We also analyze the practical three-state version of this problem in detail and show that in this case the optimal policy is a control limit policy. An efficient computational algorithm is developed for the three-state problem and illustrated by a numerical example.
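The sketch below is not the authors' algorithm; it is a minimal illustration of the kind of model the abstract describes, for the three-state case: a hidden continuous-time Markov chain sampled at equidistant epochs, multivariate normal observations with state-dependent parameters, a Bayesian belief update, and a control-limit rule on the posterior probability of the degraded state. All numerical values (the generator Q, sampling interval, observation means and covariances, and the control limit) are illustrative assumptions.

```python
# Illustrative sketch of a three-state partially observable deterioration model:
# belief (posterior) update at each sampling epoch and a control-limit policy.
# All parameters are hypothetical, not taken from the paper.
import numpy as np
from scipy.linalg import expm
from scipy.stats import multivariate_normal

# Assumed generator of the unobservable continuous-time Markov chain
# (state 0 = healthy, 1 = degraded, 2 = failed; rows sum to zero).
Q = np.array([[-0.10,  0.08, 0.02],
              [ 0.00, -0.20, 0.20],
              [ 0.00,  0.00, 0.00]])
DELTA = 1.0                      # equidistant sampling interval
P = expm(Q * DELTA)              # state-transition matrix between sampling epochs

# Assumed state-dependent observation model: bivariate normal for each state.
means = [np.array([1.0, 1.0]), np.array([2.0, 1.5]), np.array([3.5, 3.0])]
covs  = [0.2 * np.eye(2), 0.3 * np.eye(2), 0.4 * np.eye(2)]

CONTROL_LIMIT = 0.6              # hypothetical threshold on the degraded-state belief


def belief_update(belief, y):
    """One Bayes-filter step: propagate the belief through P for one sampling
    interval, then condition on the observation y collected at the epoch."""
    predicted = belief @ P
    likelihood = np.array([multivariate_normal.pdf(y, m, c)
                           for m, c in zip(means, covs)])
    posterior = predicted * likelihood
    return posterior / posterior.sum()


def decide(belief):
    """Control-limit rule: trigger preventive maintenance once the posterior
    probability of the degraded state reaches the threshold."""
    return "preventive_maintenance" if belief[1] >= CONTROL_LIMIT else "continue"


if __name__ == "__main__":
    belief = np.array([1.0, 0.0, 0.0])   # system known to be healthy at time 0
    for y in [np.array([1.1, 0.9]), np.array([2.2, 1.4]), np.array([2.4, 1.6])]:
        belief = belief_update(belief, y)
        print(np.round(belief, 3), decide(belief))
```

In this toy rule the control limit is fixed by hand; in the paper the preventive maintenance region is obtained from the average-cost optimality analysis, and for the three-state problem it reduces to a control limit of this form.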
