Abstract

This paper describes the transition from a neural network architecture to ordinary differential equations and the associated initial value problem. Two neural network architectures are compared: a classical RNN and ODE-RNN, which is built on neural ordinary differential equations. The paper proposes a new architecture, p-ODE-RNN, which achieves quality comparable to ODE-RNN but trains much faster. Furthermore, a derivation of the proposed architecture in terms of random process theory is discussed.
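For context, a minimal sketch of the connection the abstract refers to, written in the standard notation of the neural-ODE literature rather than taken from the paper itself: a stack of residual updates can be read as an Euler discretization of an initial value problem, and ODE-RNN evolves the hidden state between observation times with an ODE solver before applying an ordinary RNN cell update at each observation.

\[
h_{n+1} = h_n + f_\theta(h_n)
\;\;\longrightarrow\;\;
\frac{dh}{dt} = f_\theta\big(h(t), t\big), \qquad h(t_0) = h_0 ,
\]
\[
h_i' = \mathrm{ODESolve}\big(f_\theta,\; h_{i-1},\; (t_{i-1}, t_i)\big), \qquad
h_i = \mathrm{RNNCell}\big(h_i',\, x_i\big).
\]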
