Abstract

In this paper, a fault detection scheme for large-scale data acquisition systems is presented. The detection scheme must process up to several hundred measurements at a time and check them for consistency. It works in three steps. First, principal component analysis of training data is used to determine the non-sparse regions of the measurement space. Fault detection then amounts to checking whether a new data record lies in a cluster of training data or not. To this end, in a second step, the distribution function of the available data is estimated using kernel regression techniques. In a third step, the distribution function is approximated by a neural network in order to reduce the degrees of freedom and to determine clusters of data efficiently. To use as few basis functions as possible, a new training algorithm for ellipsoidal basis function networks is presented: new neurons are placed such that they approximate the distribution function in the vicinity of their centers up to second order, which is accomplished by adapting the spread parameters using Taylor's theorem. This dramatically reduces the number of required parameters and the computational effort of online supervision. An important requirement for the fault detection scheme is the ability to adapt automatically to new data; the paper addresses this feature as well. It is demonstrated how gradient optimization with algebraic constraints can be applied to adapt a pre-existing network to new data points. Numerical examples with real data show that the proposed method produces excellent results.
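The first two steps of the scheme (PCA to isolate the non-sparse part of the measurement space, followed by a kernel-based estimate of the data distribution used as a consistency check) can be illustrated with a minimal sketch. The sketch below uses scikit-learn's PCA and KernelDensity classes and a quantile-based density threshold as stand-ins for the paper's choices; these, along with all variable names and the synthetic data, are assumptions for illustration only. The paper's third step, approximating the density with an ellipsoidal basis function network, is the novel contribution and is not reproduced here.

```python
# Illustrative sketch, not the authors' implementation. The PCA variance
# level, the Gaussian kernel, its bandwidth, and the 1% quantile threshold
# are assumed choices standing in for the paper's steps 1 and 2.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Fault-free training records: each row is one data record of many measurements.
X_train = rng.normal(size=(1000, 50))

# Step 1: PCA restricts attention to the non-sparse subspace of the data.
pca = PCA(n_components=0.95)            # keep 95% of the variance (assumed)
Z_train = pca.fit_transform(X_train)

# Step 2: estimate the distribution of the projected training data with a
# kernel density estimator (a common kernel-based stand-in here).
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(Z_train)

# Detection rule: a new record is flagged as faulty if it falls outside the
# clusters of training data, i.e. its estimated (log-)density is too low.
density_threshold = np.quantile(kde.score_samples(Z_train), 0.01)

def is_faulty(x_new: np.ndarray) -> bool:
    """Return True if the record does not lie in a cluster of training data."""
    z_new = pca.transform(x_new.reshape(1, -1))
    return kde.score_samples(z_new)[0] < density_threshold

print(is_faulty(rng.normal(size=50)))   # typically False: consistent record
print(is_faulty(np.full(50, 10.0)))     # typically True: inconsistent record
```

In the paper, the kernel density estimate is only an intermediate step: the online check is performed against the compact ellipsoidal basis function approximation of this distribution, which is what keeps the computational effort of supervision low.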
