Abstract

All methods for Fault Detection and Isolation (FDI) involve internal parameters, often called hyperparameters, that have to be carefully tuned. Most often, tuning is ad hoc, which makes it difficult to ensure that any comparison between methods is unbiased. We propose to treat the evaluation of the performance of a method with respect to its hyperparameters as a computer experiment, and to achieve tuning via global optimization based on Kriging and Expected Improvement. This approach is applied to several residual-evaluation (or change-detection) algorithms on classical test cases. Simulation results show the interest, practicability, and performance of this methodology, which should facilitate the automatic tuning of the hyperparameters of a method and allow a fair comparison of a collection of methods on a given set of test cases. The computational cost turns out to be much lower than that of other general-purpose optimization methods such as genetic algorithms.
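
To make the approach concrete, here is a minimal sketch of Kriging-based tuning with Expected Improvement, in the spirit of EGO-style Bayesian optimization. The objective `fdi_performance` (a stand-in for one "computer experiment" evaluating an FDI method at a given hyperparameter value), the squared-exponential covariance, the search interval, and the budget are all illustrative assumptions, not the paper's actual experimental setup.

```python
# Sketch: hyperparameter tuning via Kriging + Expected Improvement (EGO-style).
# All problem-specific choices below (objective, kernel, grid) are hypothetical.
import numpy as np
from scipy.stats import norm

def fdi_performance(h):
    # Hypothetical cost of running an FDI method with hyperparameter h
    # (e.g., a weighted false-alarm / missed-detection criterion), to minimize.
    return (h - 0.3) ** 2 + 0.05 * np.sin(20 * h)

def kriging_predict(X, y, Xs, length=0.1, noise=1e-8):
    # Simple-Kriging predictor with a squared-exponential covariance.
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha                      # posterior mean at candidates
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v**2, axis=0), 1e-12, None)
    return mu, np.sqrt(var)                # posterior mean and std deviation

def expected_improvement(mu, sigma, y_best):
    # EI for minimization: expected amount by which a candidate beats
    # the best cost observed so far.
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 5)                   # initial design of experiments
y = np.array([fdi_performance(h) for h in X])
grid = np.linspace(0, 1, 500)              # candidate hyperparameter values

for _ in range(15):                        # sequential EGO iterations
    mu, sigma = kriging_predict(X, y, grid)
    h_next = grid[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.append(X, h_next)               # run one more "experiment"
    y = np.append(y, fdi_performance(h_next))

print(f"best hyperparameter = {X[np.argmin(y)]:.3f}, cost = {y.min():.4f}")
```

The key design choice, as in the abstract, is that each evaluation of the FDI method is expensive, so the Kriging surrogate and EI criterion concentrate the few available evaluations where improvement is most likely, which is why the cost stays far below population-based methods such as genetic algorithms.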
