Abstract
Inspired by applications in the optimal control of semilinear elliptic partial differential equations and in physics-integrated imaging, differential-equation-constrained optimization problems whose constituents are accessible only through data-driven techniques are studied. A particular focus is on the analysis of, and on numerical methods for, problems with machine-learned components. In a rather general setting, an error analysis is provided, and particular properties arising from artificial-neural-network-based approximations are addressed. Moreover, for each of the two inspiring applications, analytical details are presented and numerical results are provided.
Highlights
minimize J(y, u) := ½‖Ay − g‖²_H + (α/2)‖u‖²_U over (y, u) ∈ Y × U, subject to (s.t.) e(y, u) = 0 and u ∈ C_ad,   (1.1)

where y ∈ Y, u ∈ U are the state and control variables, respectively, with Y a suitable Banach space and U a Hilbert space.
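To make the structure of (1.1) concrete, the following sketch solves a toy discretized instance in Python: a one-dimensional finite-difference Laplacian K plays the role of the constraint e(y, u) = Ky − u = 0 (so the solution map is y = K⁻¹u), A is the identity, and the box constraint set C_ad is handled by projected gradient descent on the reduced objective. All concrete choices here (grid size, operator, target g, bounds, α) are illustrative and not taken from the paper.

```python
import numpy as np

# Toy instance of (1.1): constraint K y = u on n interior grid points,
# J(y, u) = 0.5*||y - g||^2 + 0.5*alpha*||u||^2, box set C_ad = [lo, hi]^n.
n, alpha = 50, 1e-2
h = 1.0 / (n + 1)
# Finite-difference Laplacian, standing in for the constraint operator.
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
g = np.sin(np.pi * h * np.arange(1, n + 1))   # illustrative target state
lo, hi = -5.0, 5.0                            # control bounds defining C_ad

Kinv = np.linalg.inv(K)                       # solution map u -> y = K^{-1} u

def reduced_grad(u):
    """Gradient of the reduced objective u -> J(K^{-1}u, u)."""
    y = Kinv @ u
    return Kinv.T @ (y - g) + alpha * u

# Projected gradient descent: a safe step below 1/L, with L the Lipschitz
# constant ||K^{-1}||^2 + alpha of the reduced gradient.
u = np.zeros(n)
step = 0.5 / (np.linalg.norm(Kinv, 2) ** 2 + alpha)
for _ in range(2000):
    u = np.clip(u - step * reduced_grad(u), lo, hi)   # projection onto C_ad

y = Kinv @ u                                   # associated state
```

The reduced formulation eliminates y via the solution map, which is exactly the viewpoint taken when the equality constraint (or its solution operator) is later replaced by a learned surrogate.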
We have proposed and analyzed a general optimization scheme for solving optimal control problems subject to constraints governed by learning-informed differential equations.
We envisage that our work will provide a fundamental framework for dealing with physical models whose underlying differential equation is partially unknown and needs to be learned from data, with the latter typically obtained from experiments or measurements.
Summary
In the physics-integrated imaging application, one integrates a mathematical model of the acquisition physics (the Bloch equations [20]) into the associated image reconstruction task in order to relate qualitative information (such as the net magnetization y = ρm) with objective, tissue-dependent quantitative information (such as T1 and T2, the longitudinal and transverse relaxation times, respectively, or the proton spin density ρ). This model is used to obtain quantitative reconstructions from noisy, subsampled measurement data g in k-space by a variational approach. In the present work we propose using a parameter-to-solution operator Π_N induced by trained neural networks, modelling either the equality constraint (with, e.g., f replaced by an ANN-based model N in our example (1.3)) or its (implicitly defined) solution map Π. In such a setting, the existence, convergence, stability, and error bounds of the corresponding approximations need to be analyzed.