Abstract

Scalability of optimization algorithms is a major challenge in coping with the ever-growing size of optimization problems in a wide range of application areas, from high-dimensional machine learning to complex large-scale engineering problems. The field of large-scale global optimization is concerned with improving the scalability of global optimization algorithms, particularly population-based metaheuristics. Such metaheuristics have been successfully applied to continuous, discrete, and combinatorial problems ranging from several thousand dimensions to billions of decision variables. In this two-part survey, we review recent studies in the field of large-scale black-box global optimization to help researchers and practitioners gain a bird’s-eye view of the field and learn about its major trends and state-of-the-art algorithms. Part I of the series covers two major algorithmic approaches to large-scale global optimization: 1) problem decomposition and 2) memetic algorithms. Part II of the series covers a range of other algorithmic approaches to large-scale global optimization, describes a wide range of problem areas, and finally touches upon the pitfalls and challenges of current research and identifies several potential areas for future research.

Highlights

  • The curse of dimensionality is “a malediction that has plagued the scientists from the earliest days” [1] and taming it has been at the heart of many research efforts in computational sciences ranging from computational linear algebra [2] and machine learning [3] to numerical optimization [4]

  • When observing a nonzero value, it is important to determine whether it is caused by a genuine interaction or by computational errors. This clearly affects the choice of ε, which has been investigated in several studies. GDG [156] normalizes the values, which makes it less sensitive, and uses a tighter threshold, which they call σ, to detect interactions (a minimal sketch of such a threshold-based check follows this list)

  • We review the relevant memetic and other hybrid algorithms used for large-scale global optimization in reference to the above design considerations
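
To make the role of the interaction-detection threshold concrete, the following is a minimal Python sketch of a pairwise, differential-grouping-style check against a user-chosen threshold eps. The function name interacts, the choice of base points, and the toy objective are illustrative assumptions and do not reproduce the exact procedure of any specific method cited above.

    import numpy as np

    def interacts(f, i, j, lb, ub, eps=1e-3):
        # Measure the change caused by perturbing variable i at two different
        # settings of variable j. For an additively separable pair the two
        # changes agree up to round-off, so only a difference larger than eps
        # is treated as evidence of a genuine interaction.
        x = np.array(lb, dtype=float)        # base point: lower bounds
        x_pert = x.copy()
        x_pert[i] = ub[i]                    # perturb variable i
        delta1 = f(x_pert) - f(x)

        y = x.copy()
        y[j] = 0.5 * (lb[j] + ub[j])         # move variable j to its midpoint
        y_pert = y.copy()
        y_pert[i] = ub[i]                    # perturb variable i again
        delta2 = f(y_pert) - f(y)

        return abs(delta1 - delta2) > eps

    # Toy objective: x0*x1 is nonseparable, x2**2 is separable from the rest.
    f = lambda x: x[0] * x[1] + x[2] ** 2
    lb, ub = [-5.0, -5.0, -5.0], [5.0, 5.0, 5.0]
    print(interacts(f, 0, 1, lb, ub))   # True: genuine interaction
    print(interacts(f, 0, 2, lb, ub))   # False: difference stays within eps

How eps is set governs the trade-off between missing weak interactions and mistaking round-off noise for structure, which is precisely the sensitivity issue that normalized values and tighter thresholds such as σ aim to reduce.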


Summary

INTRODUCTION

The curse of dimensionality is “a malediction that has plagued the scientists from the earliest days” [1], and taming it has been at the heart of many research efforts in computational sciences, ranging from computational linear algebra [2] and machine learning [3] to numerical optimization [4]. Large-scale optimization is concerned with the scalability of optimization algorithms as the number of decision variables grows, as well as the effect of this growth on the number of constraints and their dimensionality. Advances in machine learning and the rise of deep artificial neural networks have resulted in optimization problems with over a billion variables [8, 9]. Not only do these optimization problems grow in size, they do so at an exponential rate, i.e., the number of decision variables they entail grows exponentially [10]. This rapid growth has stimulated scientific research in various areas to build scalable optimization algorithms.

Objective
LINKAGE LEARNING AND EXPLOITING PROBLEM STRUCTURE
Implicit Methods
Explicit Methods
Advantages and disadvantages of explicit and implicit methods
HYBRIDIZATION AND MEMETIC ALGORITHMS
Findings
CONCLUDING REMARKS