Abstract

Using neural networks to address data-driven problems often entails dealing with uncertainty. However, propagating uncertainty through a network’s nonlinear layers is usually a bottleneck: existing techniques, which transmit Gaussian distributions via moment estimation, cannot predict non-Gaussian distributions. In this study, a Gaussian-mixture-based uncertainty propagation scheme is proposed for neural networks. Given that any input uncertainty can be characterized as a Gaussian mixture with a finite number of components, the developed scheme actively examines each mixture component and adaptively splits those whose fidelity in representing uncertainty is degraded by the network’s nonlinear activation layers. A Kullback–Leibler criterion that directly measures the nonlinearity-induced non-Gaussianity in post-activation distributions is derived to trigger splitting, and a set of high-precision Gaussian splitting libraries is established. Four uncertainty propagation examples on dynamic systems and data-driven applications are demonstrated; in all of them, the developed scheme exhibits exemplary fidelity and efficiency in predicting the evolution of non-Gaussian distributions through both recurrent and multi-layer neural networks.
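
The adaptive-splitting idea in the abstract can be illustrated with a small sketch. The code below is not the authors’ implementation: it propagates a one-dimensional Gaussian mixture through a tanh activation, uses a Monte Carlo estimate of the Kullback–Leibler divergence between the exact post-activation density and its moment-matched Gaussian as the splitting trigger, and substitutes a simple moment-preserving three-way split for the paper’s high-precision splitting libraries. The function names and the tolerances kl_tol and max_components are assumptions made for the illustration.

    import numpy as np
    from scipy.stats import norm

    # Illustrative moment-preserving 3-way split of N(0, 1): weights and means
    # keep the split mixture at mean 0 and variance 1 (2*0.25*1^2 + 0.5 = 1).
    # The paper's high-precision splitting libraries would replace these values.
    SPLIT_W = np.array([0.25, 0.5, 0.25])
    SPLIT_M = np.array([-1.0, 0.0, 1.0])
    SPLIT_S = np.sqrt(0.5)

    def kl_nongaussianity(mu, sigma, n=10_000, seed=0):
        """Monte Carlo estimate of KL(p_Y || q) for Y = tanh(X), X ~ N(mu, sigma^2),
        where q is the Gaussian moment-matched to Y. Measures how non-Gaussian
        the activation renders this component."""
        x = np.random.default_rng(seed).normal(mu, sigma, n)
        y = np.tanh(x)
        # Exact density of Y by change of variables: p_Y(y) = p_X(x) / (1 - y^2).
        log_p = norm.logpdf(x, mu, sigma) - np.log1p(-y**2)
        log_q = norm.logpdf(y, y.mean(), y.std())
        return float(np.mean(log_p - log_q))

    def split_component(w, mu, sigma):
        """Replace one Gaussian (w, mu, sigma) with three narrower ones, shifted
        and scaled from the standard-normal split so mean and variance are kept."""
        return [(w * wi, mu + sigma * mi, sigma * SPLIT_S)
                for wi, mi in zip(SPLIT_W, SPLIT_M)]

    def propagate_tanh(mixture, kl_tol=1e-3, max_components=64):
        """Adaptively split components whose tanh image is too non-Gaussian
        (KL above kl_tol), then moment-match each survivor through tanh."""
        work, out = list(mixture), []
        while work:
            w, mu, sigma = work.pop()
            if (kl_nongaussianity(mu, sigma) > kl_tol
                    and len(work) + len(out) + 3 <= max_components):
                work.extend(split_component(w, mu, sigma))
            else:
                y = np.tanh(np.random.default_rng(1).normal(mu, sigma, 10_000))
                out.append((w, y.mean(), y.std()))
        return out

    post = propagate_tanh([(1.0, 0.0, 2.0)])  # one wide Gaussian, strongly distorted by tanh
    print(len(post), "components after adaptive splitting")

Each split shrinks a component’s standard deviation by a factor of sqrt(0.5), so the activation looks increasingly linear over each sub-component and Gaussian moment matching regains fidelity, which is the mechanism the abstract describes.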
