Abstract

Data assimilation is a key component of operational systems and scientific studies for the understanding, modeling, forecasting and reconstruction of earth systems informed by observation data. Here, we investigate how physics-informed deep learning may provide new means to revisit data assimilation problems. We develop a so-called end-to-end learning approach, which explicitly relies on a variational data assimilation formulation. Using automatic differentiation embedded in deep learning frameworks, the key novelty of the proposed physics-informed approach is to allow the joint training of the representation of the dynamical process of interest and of the solver of the data assimilation problem. We may perform this joint training using both supervised and unsupervised strategies. Our numerical experiments on Lorenz-63 and Lorenz-96 systems report significant gains w.r.t. a classic gradient-based minimization of the variational cost, both in terms of reconstruction performance and optimization complexity. Intriguingly, we also show that the variational models issued from the true Lorenz-63 and Lorenz-96 ODE representations may not lead to the best reconstruction performance. We believe these results may open new research avenues for the specification of assimilation models for earth systems, both to speed up the inversion problem with trainable solvers and, possibly more importantly, in the way data assimilation systems are designed, for instance regarding the representation of geophysical dynamics.

Highlights

  • In geoscience, the reconstruction of the dynamics of a given state or process from a sequence of partial and noisy observations of the system is referred to as a data assimilation problem

  • We introduce a novel end-to-end learning framework for data assimilation, which uses as inputs a sequence of observations and delivers as outputs a reconstructed state sequence

  • We report numerical experiments on Lorenz-63 and Lorenz-96 dynamics, which support the relevance of the proposed framework w.r.t. classic variational data assimilation schemes


Summary

Introduction

The reconstruction of the dynamics of a given state or process from a sequence of partial and noisy observations of the system is referred to as a data assimilation problem. The proposed end-to-end learning architecture combines two main components: a first neural-network architecture dedicated to the definition and evaluation of the variational assimilation cost, and a second neural-network architecture corresponding to a gradient-based solver of some target loss function. The latter exploits ideas similar to optimizer learning [1, 34, 20] and directly benefits from the automatic differentiation tools embedded in deep learning frameworks, so that we do not need to derive explicitly the adjoint operator of the considered dynamical model.
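To make the variational formulation concrete, the sketch below implements a cost of the generic form U(x) = λ_obs ‖x − y‖²_Ω + λ_dyn ‖x − Φ(x)‖² and minimizes it with a fixed number of explicit gradient steps. Everything here is an illustrative assumption rather than the paper's configuration: Φ is taken linear so the gradient can be written by hand (in the proposed framework, automatic differentiation supplies this gradient for an arbitrary neural Φ, and the fixed-step inner loop is replaced by a trainable solver), and the weights and step size are arbitrary.

```python
import numpy as np

def variational_cost(x, y, mask, phi, lam_obs=1.0, lam_dyn=0.1):
    """U(x) = lam_obs * ||x - y||^2_Omega + lam_dyn * ||x - Phi(x)||^2.

    mask is 1 on the observed domain Omega and 0 elsewhere."""
    obs_term = np.sum(mask * (x - y) ** 2)
    dyn_term = np.sum((x - phi @ x) ** 2)
    return lam_obs * obs_term + lam_dyn * dyn_term

def grad_cost(x, y, mask, phi, lam_obs=1.0, lam_dyn=0.1):
    # Hand-derived gradient, valid only because Phi is linear here.
    # A deep learning framework would return this via autodiff.
    r = x - phi @ x                          # dynamical residual
    g_obs = 2.0 * lam_obs * mask * (x - y)
    g_dyn = 2.0 * lam_dyn * (r - phi.T @ r)  # (I - Phi)^T r
    return g_obs + g_dyn

def assimilate(y, mask, phi, n_iter=200, step=0.05):
    """Classic fixed-step gradient descent on the variational cost."""
    x = mask * y                             # initialize with the observations
    for _ in range(n_iter):
        x = x - step * grad_cost(x, y, mask, phi)
    return x

# Toy setup: a linear smoothing prior Phi averaging the two neighbours.
n = 20
phi = 0.5 * (np.eye(n, k=1) + np.eye(n, k=-1))
phi[0, :] = 0.0                              # crude boundary handling
phi[-1, :] = 0.0
rng = np.random.default_rng(0)
x_true = np.sin(np.linspace(0, 2 * np.pi, n))
mask = (rng.random(n) < 0.5).astype(float)   # observe roughly half the points
y = mask * (x_true + 0.01 * rng.normal(size=n))
x_hat = assimilate(y, mask, phi)
```

The inner loop above is exactly what the learned solver replaces: instead of a hand-tuned step size and iteration count, a recurrent network maps the autodiff gradient to a state update.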

  • Problem statement
  • End-to-end learning framework
  • Constrained CNN formulation for operator Φ
  • End-to-end architecture
  • Learning setting
  • Numerical experiments
  • Lorenz-63 dynamics
  • Lorenz-96 dynamics
  • Related work
  • Findings
  • Conclusion
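The Lorenz-63 and Lorenz-96 systems listed in the outline above are standard chaotic testbeds for data assimilation. For reference, a minimal sketch of their governing ODEs with the usual parameter values (σ = 10, ρ = 28, β = 8/3 for Lorenz-63; forcing F = 8 for Lorenz-96); the RK4 integrator, step sizes, and initial conditions are our illustrative choices, not necessarily the paper's experimental setup:

```python
import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Lorenz-63 time derivative with the classic chaotic parameters."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def lorenz96(state, forcing=8.0):
    """Lorenz-96 time derivative dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F,
    with cyclic boundary conditions via np.roll."""
    xm2 = np.roll(state, 2)   # x_{i-2}
    xm1 = np.roll(state, 1)   # x_{i-1}
    xp1 = np.roll(state, -1)  # x_{i+1}
    return (xp1 - xm2) * xm1 - state + forcing

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Short trajectories for each system.
traj63 = [np.array([1.0, 1.0, 1.0])]
for _ in range(100):
    traj63.append(rk4_step(lorenz63, traj63[-1], 0.01))

state96 = 8.0 * np.ones(40)
state96[0] += 0.01                      # small perturbation off the fixed point
traj96 = [state96]
for _ in range(100):
    traj96.append(rk4_step(lorenz96, traj96[-1], 0.05))
```

In the assimilation experiments, such trajectories play the role of the true state, from which partial and noisy observations are simulated.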

