Abstract

In this work we explore the properties that make many real-life global optimization problems extremely difficult to handle, and some of the common techniques used in the literature to address them. We then introduce a general optimization management tool called GloMPO (Globally Managed Parallel Optimization) to help address some of the challenges faced by practitioners. GloMPO manages and shares information between traditional optimization algorithms run in parallel. We intend GloMPO to be a flexible framework that allows for customization and hybridization of various optimization ideas, while also substituting for the human interventions and decisions that are a common feature of optimizing hard problems. GloMPO is shown to produce lower minima than traditional optimization approaches on global optimization test functions, the Lennard-Jones cluster problem, and ReaxFF reparameterizations. Its novel feature of forced optimizer termination is shown to find better minima than unmanaged optimization. GloMPO also provides qualitative benefits, such as identifying degenerate minima and offering a standardized interface and workflow manager.

Highlights

  • In this work, we are interested in tackling the hardest of global optimization challenges: high-dimensional, expensive, and black-box (HEB) problems [57]

  • The win percentage is the fraction of the 100 bouts in a set won by Globally Managed Parallel Optimization (GloMPO)

  • Sets involving Nudging CMA-ES (N-CMA) are excluded from this figure and discussed later, so that the effect of supervision and termination can be studied in isolation


Summary

Introduction

High-dimensional, expensive, black-box optimization

In this work, we are interested in tackling the hardest of global optimization challenges: high-dimensional, expensive, and black-box (HEB) problems [57]. Black-box optimization problems, ones for which no gradient information is available, are generally regarded as some of the most difficult to handle. This is because optimizers can be led astray by rough surfaces, and many more function evaluations are typically needed for the optimizer to learn about the structure of the problem. High dimensionality demands increased function evaluations, but a high evaluation expense makes this infeasible. The consequence of this complexity is a significant reduction in the number of optimization algorithms which can be used. Numerous options exist to tackle problems with one or two of these difficulties, but rarely are all three addressed simultaneously [57].
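The management pattern described above, parallel optimizers supervised by a manager that shares information and force-terminates laggards, can be illustrated with a toy sketch. All names and details here are illustrative assumptions, not GloMPO's actual API: a simple random-search "optimizer" stands in for real algorithms such as CMA-ES, and parallelism is simulated round-robin for clarity.

```python
import math
import random

def rastrigin(x):
    """Multimodal test function; global minimum of 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

class RandomSearchOptimizer:
    """Toy local optimizer: perturb the incumbent, keep improvements."""
    def __init__(self, func, x0, step=0.5, rng=None):
        self.func = func
        self.best_x = list(x0)
        self.best_f = func(x0)
        self.step = step
        self.rng = rng or random.Random()

    def iterate(self):
        cand = [xi + self.rng.gauss(0, self.step) for xi in self.best_x]
        f = self.func(cand)
        if f < self.best_f:
            self.best_x, self.best_f = cand, f
        return self.best_f

def managed_parallel_optimization(func, dim=2, n_opt=4, n_cycles=40,
                                  steps_per_cycle=25, seed=0):
    """Round-robin simulation of parallel optimizers under a manager.
    After each cycle the worst-performing optimizer is force-terminated
    and restarted near the best-known point (information sharing)."""
    rng = random.Random(seed)
    opts = [RandomSearchOptimizer(func,
                                  [rng.uniform(-5, 5) for _ in range(dim)],
                                  rng=random.Random(rng.random()))
            for _ in range(n_opt)]
    for _ in range(n_cycles):
        for opt in opts:
            for _ in range(steps_per_cycle):
                opt.iterate()
        opts.sort(key=lambda o: o.best_f)
        # Forced termination: replace the laggard with a fresh optimizer
        # seeded near the current overall best.
        x0 = [xi + rng.gauss(0, 0.5) for xi in opts[0].best_x]
        opts[-1] = RandomSearchOptimizer(func, x0,
                                         rng=random.Random(rng.random()))
    best = min(opts, key=lambda o: o.best_f)
    return best.best_x, best.best_f
```

The key design choice this sketch mirrors is that the manager, not the individual optimizers, decides when to stop a run, and restarted optimizers benefit from the best point found so far rather than starting from scratch.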
