Abstract

In safety-critical engineering applications, such as robust prediction against adversarial noise, it is necessary to quantify the uncertainty of neural networks. Interval neural networks (INNs) are effective models for uncertainty quantification, producing an interval of predictions instead of a single value for each input. This article formulates the training of an INN as a chance-constrained optimization problem. The optimal solution of this formulation naturally yields an INN that gives the tightest prediction interval at a required confidence level. Since the chance-constrained optimization problem is intractable, a sample-based continuous approximation method is used to obtain approximate solutions. We prove the uniform convergence of the approximation, showing that the approximate problem yields optimal INNs consistent with those of the original problem. Additionally, we investigate the reliability of the approximation with finite samples, deriving a probability bound on constraint violation. Through a numerical example and an application case study of anomaly detection in wind power data, we evaluate the effectiveness of the proposed INN against existing approaches, including Bayesian neural networks, highlighting its capability to significantly improve the performance of INNs for regression and unsupervised anomaly detection.
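To make the idea concrete, the following is a minimal sketch of the sample-based approximation described above, not the authors' implementation. The chance constraint P(l(x) ≤ y ≤ u(x)) ≥ 1 − ε is replaced by a hinge penalty over N observed samples, and for brevity the interval "network" is reduced to a pair of linear bound models; the function name `fit_inn` and the hyperparameters `lam`, `lr`, and `steps` are illustrative assumptions.

```python
import numpy as np

def fit_inn(x, y, lam=10.0, lr=0.02, steps=3000):
    """Sketch: fit linear lower/upper bounds l(x), u(x) by minimizing
    mean interval width + lam * mean coverage violation (hinge penalty),
    a sample-based surrogate for the chance-constrained problem."""
    w_l, b_l, w_u, b_u = 0.0, -1.0, 0.0, 1.0
    for _ in range(steps):
        l = w_l * x + b_l
        u = w_u * x + b_u
        below = (y > u).astype(float)  # samples escaping above the upper bound
        above = (l > y).astype(float)  # samples escaping below the lower bound
        # subgradients of: mean(u - l) + lam * mean(relu(y - u) + relu(l - y))
        g_wu = np.mean(x) - lam * np.mean(below * x)
        g_bu = 1.0 - lam * np.mean(below)
        g_wl = -np.mean(x) + lam * np.mean(above * x)
        g_bl = -1.0 + lam * np.mean(above)
        w_u -= lr * g_wu; b_u -= lr * g_bu
        w_l -= lr * g_wl; b_l -= lr * g_bl
    return w_l, b_l, w_u, b_u

# Synthetic regression data: a linear trend with Gaussian noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = 2.0 * x + rng.normal(0.0, 0.2, 500)

w_l, b_l, w_u, b_u = fit_inn(x, y)
cover = np.mean((w_l * x + b_l <= y) & (y <= w_u * x + b_u))
width = np.mean((w_u * x + b_u) - (w_l * x + b_l))
```

At the penalty method's equilibrium the empirical coverage settles near 1 − 2/`lam`, so `lam` plays the role of the confidence requirement: larger `lam` forces wider intervals with higher coverage, mirroring the tightest-interval-at-given-confidence trade-off in the abstract.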
