Abstract
Neural Jump SDEs (Jump Diffusions) and Neural PDEs
Highlights
Goes beyond neural ordinary differential equations to things like neural stochastic differential equations and neural delay differential equations
There were two separate papers on neural stochastic differential equations, showing them to be the limit of deep latent Gaussian models
I wanted to show how combining methods for stiff differential equations with a method-of-lines discretization gives a way to train neural partial differential equations
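To make the method-of-lines idea concrete, here is a minimal, package-free Julia sketch (my own illustration, not code from the post): the heat equation u_t = u_xx is semidiscretized on a spatial grid, turning the PDE into a stiff system of ODEs. That system is exactly where stiff solvers come in, and where a neural network can replace the discretized spatial operator.

```julia
# Hedged sketch (illustrative, not from the post): method-of-lines
# discretization of u_t = u_xx on [0,1] with zero Dirichlet boundaries.
N  = 32                       # number of interior grid points
dx = 1.0 / (N + 1)
x  = range(dx, step=dx, length=N)
u  = sin.(pi .* x)            # initial condition

# Central-difference Laplacian: (u[i-1] - 2u[i] + u[i+1]) / dx^2,
# with zero values outside the boundary.
function laplacian(u, dx)
    n = length(u)
    [((i == 1 ? 0.0 : u[i-1]) - 2u[i] + (i == n ? 0.0 : u[i+1])) / dx^2
     for i in 1:n]
end

# A few explicit Euler steps just to exercise the semidiscretization.
# The dt <= O(dx^2) stability restriction here is precisely why stiff
# (implicit) methods are the practical choice for the ODE system.
dt = 0.4 * dx^2
for _ in 1:100
    u .+= dt .* laplacian(u, dx)
end
```

The key point is that after discretization the whole problem is "just" an ODE system, so any neural ODE training machinery applies, provided the solver can handle the stiffness the spatial operator introduces.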
Summary
While the previously posted example uses forward-mode, we have found that this approach is much, much faster on neural SDEs, so if you're trying to train them, I would recommend using this code instead (and I'll get the examples updated). To show what that looks like, let's define a jump diffusion and solve it 100 times, taking its mean as our training data:

```julia
using Flux, DiffEqFlux, StochasticDiffEq, Plots, DiffEqMonteCarlo, DiffEqJump

u0 = Float32[2.; 0.]
datasize = 30
tspan = (0.0f0, 1.0f0)

# `mp`, `t`, and the neural networks `dudt` and `dudt2` are defined
# earlier in the full example and are assumed to be in scope here.
g(u, p, t) = mp .* u

n_sde = function (x)
    dudt_(u, p, t) = dudt(u)
    rate(u, p, t) = 2.0
    affect!(integrator) = (integrator.u = dudt2(integrator.u))
    jump = ConstantRateJump(rate, affect!)
    prob = SDEProblem(dudt_, g, param(x), tspan, nothing)
    jump_prob = JumpProblem(prob, Direct(), jump, save_positions=(false, false))
    solve(jump_prob, SOSRI(); saveat=t, abstol=0.1, reltol=0.1) |> Tracker.collect
end

pred = n_sde(u0) # Get the prediction using the correct initial condition
dudt__(u, p, t) = Flux.data(dudt(u))
```

I turned it off by default because I was training this on my dinky laptop :)
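For intuition about the data-generation step (solve the jump diffusion 100 times and average), here is a hedged, package-free sketch of the same idea for a scalar jump diffusion, simulated with Euler-Maruyama plus Poisson-style jumps. All names and constants here (`simulate_path`, the drift `a`, the jump factor, etc.) are illustrative stand-ins, not values from the post.

```julia
# Illustrative sketch: a scalar jump diffusion du = a*u dt + b*u dW,
# where a constant-rate jump multiplies the state when it fires.
# Solving 100 times and averaging gives a smooth training target.
using Random, Statistics
Random.seed!(1)

function simulate_path(; a=-0.5, b=0.2, rate=2.0, jump=0.9, u0=2.0,
                        tspan=(0.0, 1.0), nsteps=200)
    dt = (tspan[2] - tspan[1]) / nsteps
    u = fill(u0, nsteps + 1)
    for i in 1:nsteps
        # Euler-Maruyama step for the diffusion part
        u[i+1] = u[i] + a * u[i] * dt + b * u[i] * sqrt(dt) * randn()
        # Constant-rate jump: fires with probability rate*dt per step,
        # playing the role of the `affect!` in the real example
        if rand() < rate * dt
            u[i+1] *= jump
        end
    end
    return u
end

paths = [simulate_path() for _ in 1:100]
training_data = mean(paths)   # elementwise mean over the 100 solves
```

Averaging over many paths is what turns the noisy individual trajectories into a stable target for the loss, which is why the real example uses the mean of 100 solves rather than a single realization.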
Published Version (Free)