Abstract

We have invented two new Bayesian deep learning algorithms that use stochastic particle flow to compute Bayes’ rule. These learning algorithms have a continuum of layers, in contrast with the 10 to 100 discrete layers in standard deep learning neural nets. We compute Bayes’ rule for learning using a stochastic particle flow designed with Gromov’s method. Both deep learning and standard particle filters suffer from the curse of dimensionality, and we mitigate this problem by using stochastic particle flow to compute Bayes’ rule. The intuitive explanation for the dramatic reduction in computational complexity is that stochastic particle flow adaptively moves particles to the correct region of d-dimensional space to represent the multivariate probability density of the state vector conditioned on the data. There is nothing analogous to this in standard neural nets (deep or shallow), where the geometry of the network is fixed.
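
To illustrate the idea of moving particles in a pseudo-time "continuum of layers", the sketch below implements the simplest special case: the deterministic exact particle flow of Daum and Huang for a Gaussian prior and a linear measurement. This is not the stochastic, Gromov-designed flow of the paper; it is only meant to show how a flow ODE transports prior samples toward the posterior. The function name `exact_flow_update` and all parameter choices are illustrative assumptions, not part of the paper.

```python
import numpy as np

def exact_flow_update(particles, x_bar, P, H, R, z, n_steps=30):
    """Move prior particles toward the posterior by Euler-integrating the
    flow ODE dx/dlambda = A(lambda) x + b(lambda) over lambda in [0, 1].

    Deterministic Daum-Huang "exact" flow for a linear-Gaussian model:
        prior  x ~ N(x_bar, P),  measurement  z = H x + v,  v ~ N(0, R).
    The paper's stochastic flow would add a diffusion term to this ODE.
    """
    d_lam = 1.0 / n_steps
    lam = 0.0
    X = particles.copy()                     # shape (n_particles, d)
    R_inv = np.linalg.inv(R)
    for _ in range(n_steps):
        lam += d_lam
        S = lam * H @ P @ H.T + R            # innovation-like matrix at lambda
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        I = np.eye(A.shape[0])
        b = (I + 2 * lam * A) @ ((I + lam * A) @ P @ H.T @ R_inv @ z + A @ x_bar)
        X = X + d_lam * (X @ A.T + b)        # Euler step of the flow for all particles
    return X

# Toy 2-D usage example: Gaussian prior, one scalar linear measurement
rng = np.random.default_rng(0)
x_bar = np.zeros(2)
P = np.eye(2)
H = np.array([[1.0, 0.5]])
R = np.array([[0.25]])
z = np.array([1.2])
prior_particles = rng.multivariate_normal(x_bar, P, size=500)
posterior_particles = exact_flow_update(prior_particles, x_bar, P, H, R, z)
```

In this linear-Gaussian case the flow reproduces the Kalman posterior; the point of the stochastic, Gromov-based flow in the paper is to handle nonlinear, non-Gaussian, high-dimensional problems where no such closed-form update exists.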
