Abstract

Since large-scale multi-objective problems (LSMOPs) involve huge numbers of decision variables, traditional evolutionary algorithms suffer from low exploitation efficiency and high exploration costs when solving them. This paper therefore proposes an evolutionary strategy based on two-stage accelerated search optimizers (ATAES). In the first stage, a convergence optimizer is devised: a three-layer lightweight convolutional neural network is built, and the population is divided into two subsets, a diversity subset and a convergence subset, which serve as the input nodes and the expected output nodes of the network, respectively. By repeatedly backpropagating the gradient, the network produces satisfactory individuals. Once exploitation stagnation is detected in the first stage, the second stage is run, in which a diversity optimizer based on a differential evolution algorithm with opposition-based learning is proposed to widen the exploration range of candidate solutions and thereby increase the population's diversity. Finally, to validate the algorithm's performance, ATAES was compared with other advanced multi-objective evolutionary algorithms on the LSMOP and DTLZ benchmark suites with 100, 300, 500, and 1000 decision variables, demonstrating its superiority.
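The diversity optimizer combines differential evolution with opposition-based learning, in which each candidate x in a box [lower, upper] is mirrored to lower + upper − x so that the mirrored points probe the unexplored side of the search space. The following is a minimal sketch of that general idea only; the function names, the DE/rand/1 mutation variant, and the scale factor F = 0.5 are illustrative assumptions, not the paper's exact operator:

```python
import numpy as np

rng = np.random.default_rng(0)

def opposite(pop, lower, upper):
    """Opposition-based learning: reflect each solution across
    the centre of the search box, x_opp = lower + upper - x."""
    return lower + upper - pop

def de_with_opposition(pop, lower, upper, F=0.5):
    """One illustrative DE/rand/1 mutation step whose candidate
    pool is doubled with the opposites of the mutants."""
    n, _ = pop.shape
    mutants = np.empty_like(pop)
    for i in range(n):
        # Three mutually distinct individuals for DE/rand/1.
        r1, r2, r3 = rng.choice(n, size=3, replace=False)
        mutants[i] = pop[r1] + F * (pop[r2] - pop[r3])
    mutants = np.clip(mutants, lower, upper)
    # Pool mutants with their opposites to widen exploration;
    # a selection step would then keep the best of the pool.
    return np.vstack([mutants, opposite(mutants, lower, upper)])
```

In a full algorithm, the enlarged pool returned here would be filtered by environmental selection (e.g. non-dominated sorting) back down to the population size.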
