Abstract

This article proposes a novel method to accelerate the boundary feedback control design of cascaded parabolic partial differential equations (PDEs) using DeepONet. The backstepping method is widely used in boundary control of PDE systems, but solving the backstepping kernel equations can be time-consuming. To address this, a neural operator (NO) learning scheme is leveraged to accelerate the control design for cascaded parabolic PDEs. DeepONet, a class of deep neural networks designed to approximate nonlinear operators, has shown promise for approximating PDE backstepping designs in recent studies. Specifically, we focus on approximating the gain kernel PDEs for two cascaded parabolic PDEs. We use neural operators to learn the mapping for only two of the kernel functions, while the other two are computed from their analytical solutions, which simplifies training. We establish the continuity and boundedness of the kernels and demonstrate the existence of arbitrarily close DeepONet approximations to the kernel PDEs. Furthermore, we show that the DeepONet-approximated gain kernels preserve stability when substituted for the exact backstepping gain kernels. Notably, the DeepONet operator computes these gain functions two orders of magnitude faster than conventional PDE solvers, and its theoretically proven stabilizing capability is validated through simulations.
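The operator-learning idea in the abstract can be illustrated with a minimal DeepONet forward pass: a branch network encodes the input function (e.g., a plant coefficient sampled at sensor points) and a trunk network encodes a query coordinate in the kernel's domain, and their outputs are combined by an inner product. This is a generic sketch with assumed layer sizes, activations, and randomly initialized weights; it is not the trained architecture from the paper.

```python
import numpy as np

# Minimal DeepONet forward pass (illustrative sketch only; network sizes
# and the tanh activation are assumptions, not taken from the paper).
# A DeepONet approximates an operator G: u -> G(u) by combining
#   - a branch net, encoding the input function u sampled at m sensors,
#   - a trunk net, encoding a query coordinate y,
# through an inner product of their p-dimensional feature vectors.

rng = np.random.default_rng(0)
m, p, hidden = 32, 16, 64  # sensors, basis dimension, hidden width

# Randomly initialized one-hidden-layer MLPs stand in for trained networks.
Wb1, bb1 = rng.standard_normal((hidden, m)), rng.standard_normal(hidden)
Wb2, bb2 = rng.standard_normal((p, hidden)), rng.standard_normal(p)
Wt1, bt1 = rng.standard_normal((hidden, 2)), rng.standard_normal(hidden)
Wt2, bt2 = rng.standard_normal((p, hidden)), rng.standard_normal(p)

def branch(u_samples):
    h = np.tanh(Wb1 @ u_samples + bb1)
    return Wb2 @ h + bb2

def trunk(y):
    h = np.tanh(Wt1 @ y + bt1)
    return Wt2 @ h + bt2

def deeponet(u_samples, y):
    """Predicted value of the output function G(u) at coordinate y."""
    return float(branch(u_samples) @ trunk(y))

# Example: a hypothetical plant coefficient sampled at m points, queried
# at one point (x, xi) of the gain kernel's (triangular) domain.
x_grid = np.linspace(0.0, 1.0, m)
u = np.sin(np.pi * x_grid)
k_hat = deeponet(u, np.array([0.7, 0.3]))
print(k_hat)
```

Once such a network is trained on (plant function, kernel solution) pairs, a single forward pass replaces a numerical kernel-PDE solve at inference time, which is the source of the speedup reported in the abstract.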
