Adjoint methods have been the pillar of gradient-based optimization for decades. They enable the accurate computation of the gradient (sensitivity) of a quantity of interest with respect to all system parameters in a single calculation. When the gradient is embedded in an optimization routine, the quantity of interest can be optimized so that the system exhibits the desired behavior. Adjoint methods, however, require the system's governing equations and their Jacobian. In this paper, we propose a computational strategy to infer the adjoint sensitivities from data when the governing equations might be unknown (or partly unknown) and noise might be present. The key component of this strategy is an echo state network, which learns the dynamics of nonlinear regimes with varying parameters and evolves dynamically via a hidden state. Although the framework makes no assumptions about the dynamical system, we focus on thermoacoustics, which is governed by nonlinear, time-delayed systems. First, we show that a parameter-aware echo state network (ESN) infers the parametrized dynamics. Second, we derive the adjoint of the ESN to compute two types of sensitivity: (i) parameter sensitivity, which is the gradient of a time-averaged cost functional with respect to physical or design parameters of the system, and (ii) initial condition sensitivity, which is the gradient of a cost functional of the final state with respect to the initial condition. Third, we propose the thermoacoustic echo state network (T-ESN), which embeds physical knowledge in the network architecture for improved generalization. Fourth, we apply the framework to a variety of nonlinear thermoacoustic regimes of a prototypical system. We show that the T-ESN accurately infers the adjoint sensitivities of the acoustic energy with respect to the flame parameters and initial conditions. The results are robust to noisy data, from periodic, through quasiperiodic, to chaotic regimes. The inferred adjoint sensitivities are employed to suppress an instability via steepest descent. We show that a single network predicts the nonlinear bifurcations in unseen scenarios, which allows the optimization to converge to the minimum of the acoustic energy. This work opens new possibilities for gradient-based data-driven design optimization.
Published by the American Physical Society 2024
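To make the idea of adjoint sensitivities from a data-driven surrogate concrete, the following is a minimal sketch, not the paper's implementation, of a discrete adjoint sweep through a closed-loop, parameter-aware ESN-like recurrence. All names (A, w_p, W_out, the scalar parameter p) are illustrative placeholders assumed to be available after training, and the time-averaged squared readout is only a stand-in for the acoustic energy cost functional.

```python
# Minimal sketch (assumed names, not the paper's code): parameter sensitivity of a
# time-averaged cost via the discrete adjoint of a parameter-aware ESN recurrence.
import numpy as np

rng = np.random.default_rng(0)
N_r, N_y = 50, 2                                             # reservoir / output sizes (assumed)
A = 0.5 * rng.standard_normal((N_r, N_r)) / np.sqrt(N_r)     # effective closed-loop weights (placeholder)
w_p = rng.standard_normal(N_r)                               # parameter input weights (placeholder)
b = rng.standard_normal(N_r)                                 # bias (placeholder)
W_out = rng.standard_normal((N_y, N_r)) / np.sqrt(N_r)       # readout, assumed already trained

def step(r, p):
    """One autonomous step: r_next = tanh(A r + w_p p + b)."""
    return np.tanh(A @ r + w_p * p + b)

def forward(r0, p, N):
    """Roll out the hidden state and store the trajectory."""
    rs = [r0]
    for _ in range(N):
        rs.append(step(rs[-1], p))
    return np.array(rs)                                      # shape (N+1, N_r)

def cost(rs):
    """Time-averaged squared readout, a stand-in for the acoustic energy."""
    y = rs[1:] @ W_out.T
    return np.mean(np.sum(y**2, axis=1))

def adjoint_dJdp(rs):
    """Backward adjoint sweep: gradient of the time-averaged cost w.r.t. p."""
    N = len(rs) - 1
    lam = np.zeros(N_r)                                      # adjoint variable
    dJdp = 0.0
    for k in range(N - 1, -1, -1):
        r_next = rs[k + 1]
        lam = lam + (2.0 / N) * (W_out.T @ (W_out @ r_next)) # add d(cost)/dr_{k+1}
        D = 1.0 - r_next**2                                  # tanh' at this step
        dJdp += lam @ (D * w_p)                              # accumulate d f/dp contribution
        lam = A.T @ (D * lam)                                # propagate adjoint one step back
    return dJdp

# Cross-check the adjoint gradient against central finite differences.
r0, p, N = rng.standard_normal(N_r), 0.3, 200
rs = forward(r0, p, N)
eps = 1e-6
fd = (cost(forward(r0, p + eps, N)) - cost(forward(r0, p - eps, N))) / (2 * eps)
print(adjoint_dJdp(rs), fd)                                  # the two values should agree closely
```

In a steepest-descent setting such as the instability suppression described above, a gradient of this kind would drive updates of the form p ← p − α dJ/dp until the time-averaged energy reaches a minimum; the step size α and convergence criterion here are assumptions of the sketch, not values from the paper.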