Multifidelity models integrate data from multiple sources to produce a single approximator for the underlying process. Dense low-fidelity samples are used to reduce interpolation error, while sparse high-fidelity samples are used to compensate for bias or noise in the low-fidelity samples. Deep Gaussian processes (GPs) are attractive for multifidelity modeling as they are non-parametric, robust to overfitting, perform well for small datasets, and, critically, can capture nonlinear and input-dependent relationships between data of different fidelities. Many datasets naturally contain gradient data, most commonly when they are generated by computational models that have adjoint solutions or are built in automatic differentiation frameworks. Principally, this work extends deep GPs to incorporate gradient data. We demonstrate this method on an analytical test problem and two realistic aerospace problems: one focusing on a hypersonic waverider with an inviscid gas dynamics truth model and another focusing on the canonical ONERA M6 wing with a viscous Reynolds-averaged Navier-Stokes truth model. In both examples, the gradient-enhanced deep GP outperforms a gradient-enhanced linear GP model and their non-gradient-enhanced counterparts.
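To illustrate the gradient-enhancement idea referenced in the abstract (not the paper's deep GP method), the sketch below shows a single-fidelity, 1-D Gaussian process with a squared-exponential kernel in which derivative observations enter the same joint GP as function values, via the kernel's cross-derivatives. The hyperparameters `SIG`, `ELL`, and the toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np

SIG, ELL = 1.0, 0.4   # assumed kernel amplitude and length scale (illustrative)

def k(x, xp):
    """Squared-exponential covariance k(x, x')."""
    return SIG**2 * np.exp(-(x - xp)**2 / (2 * ELL**2))

def dk_dxp(x, xp):
    """Cov[f(x), f'(x')] = dk/dx'."""
    return k(x, xp) * (x - xp) / ELL**2

def d2k(x, xp):
    """Cov[f'(x), f'(x')] = d^2 k / dx dx'."""
    return k(x, xp) * (1.0 / ELL**2 - (x - xp)**2 / ELL**4)

def ge_gp_mean(x_star, x_f, y_f, x_g, y_g, noise=1e-8):
    """Posterior mean at x_star given value observations (x_f, y_f)
    and derivative observations (x_g, y_g) in one joint GP."""
    Xf, Xg, Xs = x_f[:, None], x_g[:, None], x_star[:, None]
    Kfg = dk_dxp(Xf, Xg.T)                      # Cov[f(x_f), f'(x_g)]
    K = np.block([[k(Xf, Xf.T), Kfg],
                  [Kfg.T,       d2k(Xg, Xg.T)]])
    K += noise * np.eye(K.shape[0])             # jitter for numerical stability
    k_star = np.hstack([k(Xs, Xf.T), dk_dxp(Xs, Xg.T)])
    y = np.concatenate([y_f, y_g])
    return k_star @ np.linalg.solve(K, y)

# Toy check: f(x) = sin(2x) with exact gradients 2 cos(2x) at sparse points.
x_obs = np.array([0.0, 0.8, 1.6, 2.4])
mu = ge_gp_mean(np.linspace(0.0, 2.4, 5),
                x_obs, np.sin(2 * x_obs),
                x_obs, 2 * np.cos(2 * x_obs))
print(mu)
```

The same joint-covariance construction is what makes gradient data usable by GP-based surrogates; the paper's contribution is extending it to the deep (composed) GP setting and combining it with multifidelity data.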