Abstract

As industrial processes grow increasingly complex, fault identification becomes challenging, and even minor errors can significantly impact both productivity and system safety. To manage this challenge, fault detection and diagnosis (FDD) has emerged as a crucial strategy for maintaining system reliability and safety through condition monitoring and abnormality recovery. Statistical FDD methods that rely on large-scale process data and their features have been developed for detecting faults. This paper reviews recent investigations and developments in statistical FDD methods, focusing on probabilistic models. The theoretical background of these models is presented, including Bayesian learning and maximum likelihood estimation. We then discuss various techniques and methodologies, e.g., probabilistic principal component analysis (PPCA), probabilistic partial least squares (PPLS), probabilistic independent component analysis (PICA), probabilistic canonical correlation analysis (PCCA), and probabilistic Fisher discriminant analysis (PFDA). Several test statistics are analyzed to evaluate the discussed methods. In industrial processes, these methods require complex matrix operations and incur a considerable computational load. Finally, we discuss the current challenges and future trends in FDD.
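
The abstract names PPCA together with fault-detection test statistics; as a minimal sketch of how such a method operates in practice, the Python snippet below fits PPCA with the closed-form maximum-likelihood solution of Tipping and Bishop (1999) and monitors new samples with a Hotelling-style T² statistic on the latent scores and a squared prediction error (SPE) in the residual space. The synthetic data, the chi-square and empirical control limits, and all variable names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def fit_ppca(X, q):
    """Closed-form ML fit of probabilistic PCA (Tipping & Bishop, 1999)."""
    mu = X.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X - mu, rowvar=False))
    idx = np.argsort(eigvals)[::-1]             # sort eigenvalues descending
    eigvals, eigvecs = eigvals[idx], eigvecs[:, idx]
    sigma2 = eigvals[q:].mean()                 # ML estimate of noise variance
    W = eigvecs[:, :q] * np.sqrt(np.maximum(eigvals[:q] - sigma2, 0.0))
    return mu, W, sigma2

def latent_scores(X, mu, W, sigma2):
    """Posterior mean E[z|x] = M^{-1} W^T (x - mu), with M = W^T W + sigma2 I."""
    q = W.shape[1]
    M = W.T @ W + sigma2 * np.eye(q)
    return (X - mu) @ W @ np.linalg.inv(M)

# Illustrative synthetic process data: 2 latent factors driving 10 sensors.
d, q, n = 10, 2, 500
A = rng.normal(size=(d, q))
X_train = rng.normal(size=(n, q)) @ A.T + 0.1 * rng.normal(size=(n, d))

mu, W, sigma2 = fit_ppca(X_train, q)
Z_train = latent_scores(X_train, mu, W, sigma2)
Sz_inv = np.linalg.inv(np.cov(Z_train, rowvar=False))

def t2(X):
    """Hotelling-style T^2 on the latent scores (systematic variation)."""
    Z = latent_scores(X, mu, W, sigma2)
    return np.einsum('ij,jk,ik->i', Z, Sz_inv, Z)

def spe(X):
    """Squared prediction error in the residual (noise) subspace."""
    Z = latent_scores(X, mu, W, sigma2)
    E = X - mu - Z @ W.T                        # reconstruction residual
    return np.sum(E**2, axis=1)

# Control limits: a chi-square approximation for T^2 and an empirical
# quantile for SPE -- common simplified choices, assumed here.
t2_lim = stats.chi2.ppf(0.99, df=q)
spe_lim = np.quantile(spe(X_train), 0.99)

X_fault = X_train[:5].copy()
X_fault[:, 0] += 5.0                            # inject a sensor bias fault
print("T^2 alarms:", t2(X_fault) > t2_lim)
print("SPE alarms:", spe(X_fault) > spe_lim)
```

The two statistics are complementary in multivariate statistical process monitoring: T² flags abnormal variation within the modeled latent subspace, while SPE flags departures from it.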
