Abstract

Neural network surrogate models are often used to replace complex mathematical models in black-box and grey-box optimization. In this strategy, samples generated from a complex model are used to fit a data-driven, reduced-order model that is more amenable to optimization. Neural network models can be trained in Sobolev spaces, i.e., trained to match the complex function not only in its output values but also in the values of its derivatives up to arbitrary order. This paper examines the direct impacts of Sobolev training on neural network surrogate models embedded in optimization problems and proposes a systematic strategy for scaling Sobolev-space targets during neural network training. In particular, it is shown that Sobolev training yields surrogate models with more accurate derivatives (in addition to more accurate outputs), with direct benefits in gradient-based optimization. Three case studies demonstrate the approach: black-box optimization of the Himmelblau function, and grey-box optimization of a two-phase flash separator and of two flash separators in series. The results show that the advantages of Sobolev training are especially significant in cases of low data volume and/or optimal points near the boundary of the training dataset, regimes where neural network models traditionally struggle.
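To illustrate the idea of Sobolev training described above, the following is a minimal sketch of a loss that penalizes errors in both the surrogate's outputs and its input derivatives. It is not the paper's implementation; the network architecture, the function and helper names (`mlp`, `sobolev_loss`), and the scaling weight `sobolev_weight` are assumptions introduced for illustration only.

```python
# Illustrative Sobolev-training loss in JAX (assumed setup, not the paper's code).
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    """Initialize a small fully connected network (hypothetical architecture)."""
    params = []
    for m, n in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (m, n)) * jnp.sqrt(2.0 / m)
        params.append((w, jnp.zeros(n)))
    return params

def mlp(params, x):
    """Scalar-valued surrogate model: R^d -> R."""
    for w, b in params[:-1]:
        x = jnp.tanh(x @ w + b)
    w, b = params[-1]
    return (x @ w + b).squeeze()

def sobolev_loss(params, xs, ys, dys, sobolev_weight=1.0):
    """Match outputs ys and input gradients dys sampled from the complex model.

    xs: (N, d) inputs, ys: (N,) outputs, dys: (N, d) gradients of the complex model.
    sobolev_weight is a placeholder for the derivative-target scaling discussed in the paper.
    """
    preds = jax.vmap(lambda x: mlp(params, x))(xs)
    grads = jax.vmap(jax.grad(lambda x: mlp(params, x)))(xs)
    value_term = jnp.mean((preds - ys) ** 2)
    grad_term = jnp.mean(jnp.sum((grads - dys) ** 2, axis=-1))
    return value_term + sobolev_weight * grad_term
```

In such a setup, the parameters would be updated by differentiating `sobolev_loss` with respect to `params` (e.g., via `jax.grad`) and applying any standard gradient-based optimizer; first-order Sobolev training then reduces to adding the gradient-mismatch term to the usual regression loss.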
