Abstract

A novel multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) framework is proposed to utilize a minimal number of training samples efficiently for sequential model updates. All sample points are enforced to be feasible, and a new optimization subproblem ensures coverage of sparsely explored design regions. The MO-ASMO method evaluates high-fidelity functions only at feasible sample points. During the exploitation sampling phase, samples are selected to enhance solution accuracy rather than global exploration. Sampling tasks are especially challenging for multiobjective optimization; for an n-dimensional design space, a strategy is required for generating model update sample points near an (n − 1)-dimensional hypersurface corresponding to the Pareto set in the design space. This is addressed here using a force-directed layout algorithm, adapted from graph visualization strategies, to distribute feasible sample points evenly near the estimated Pareto set. Model validation samples are chosen uniformly on the Pareto set hypersurface, and surrogate model estimates at these points are compared to high-fidelity model responses. All high-fidelity model evaluations are stored for later use in training updated surrogate models. The MO-ASMO algorithm, along with the set of new sampling strategies, is tested using two mathematical problems and one realistic engineering problem. The second mathematical test problem is specifically designed to probe the limits of the algorithm in coping with very narrow, non-convex feasible domains; it involves oscillatory objective functions, giving rise to a discontinuous set of Pareto-optimal solutions. The third test problem demonstrates that the MO-ASMO algorithm can handle a practical engineering problem with more than 10 design variables and black-box simulations. The efficiency of the MO-ASMO algorithm is demonstrated by comparing its results on the two mathematical problems to those of the NSGA-II algorithm in terms of the number of high-fidelity function evaluations; MO-ASMO reduces total function evaluations by several orders of magnitude while converging to the same Pareto sets.
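
To make the force-directed sampling idea concrete, the sketch below spreads candidate sample points toward an even distribution via pairwise repulsion, in the spirit of the graph-layout strategies the paper adapts. It is a minimal illustration, not the paper's actual formulation: the inverse-square force kernel, step size, iteration count, the box-bound projection (standing in for the true feasibility handling and Pareto-set neighborhood), and the function name spread_points are all assumptions of this sketch.

```python
import numpy as np

def spread_points(points, n_iters=200, step=0.05, lower=0.0, upper=1.0):
    """Relax sample points toward an even spread via pairwise repulsion.

    Illustrative force-directed layout; the kernel and parameters are
    assumptions of this sketch, not the paper's formulation.
    """
    pts = np.asarray(points, dtype=float).copy()
    n = len(pts)
    for _ in range(n_iters):
        # Pairwise displacement vectors and distances between points.
        diff = pts[:, None, :] - pts[None, :, :]      # shape (n, n, d)
        dist = np.linalg.norm(diff, axis=-1)          # shape (n, n)
        np.fill_diagonal(dist, np.inf)                # no self-interaction
        # Inverse-square repulsion, summed over all other points.
        force = (diff / dist[..., None] ** 3).sum(axis=1)
        pts += step * force / n
        # Project back to the feasible region; a simple box stands in
        # for the problem's actual feasibility constraints here.
        pts = np.clip(pts, lower, upper)
    return pts

# Usage: relax 50 random points in a 2-D unit box toward even coverage.
rng = np.random.default_rng(0)
samples = spread_points(rng.random((50, 2)))
```

Inverse-square repulsion pushes tightly clustered points apart most strongly, which is what lets force-directed layouts equalize spacing; in the paper, this equalization is applied to feasible points near the estimated Pareto set rather than over a full box as in this sketch.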
