Abstract

Embedding a machine learning model within the search procedure has shown great potential in evolutionary multiobjective optimization (EMO). Prior knowledge obtained from the properties of the Pareto optimal set (PS) is a great help for reproducing high-quality offspring solutions. However, existing learning models in the EMO framework incur a high computational cost resulting from their iterative strategies or repetitive learning. To overcome this shortcoming, this paper proposes to approximate the PS with an incremental learning model. Specifically, the model consists of two interdependent parts: a learning module and a forgetting module. The basic idea is to treat all new high-quality offspring solutions at the current evolutionary iteration as a data stream, and to incrementally train a model based on Gaussian mixture models on this stream to discover the manifold structure of the PS and guide the evolutionary search. The learning module acquires knowledge from the data stream in a batch manner, while the forgetting module incrementally removes the information contributed by relatively poor solutions as they are discarded from the population. The proposed algorithm is evaluated on benchmark test suites, and the numerical experiments demonstrate that the incremental learning model helps to improve algorithm performance at a lower computational cost than representative algorithms.
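To make the learning/forgetting idea concrete, the sketch below maintains a Gaussian mixture model through accumulated sufficient statistics: the learning module adds the statistics of a new batch of solutions, and the forgetting module subtracts the statistics of solutions being discarded. This is a minimal illustrative sketch under assumed update rules, not the paper's exact algorithm; all class and method names are hypothetical.

```python
import numpy as np

class IncrementalGMM:
    """Illustrative incremental GMM: a learning module adds sufficient
    statistics of new solutions, a forgetting module subtracts statistics
    of removed solutions. Update rules are assumptions, not the paper's."""

    def __init__(self, n_components, dim, reg=1e-6, seed=0):
        rng = np.random.default_rng(seed)
        self.K, self.d, self.reg = n_components, dim, reg
        self.means = rng.normal(size=(n_components, dim))
        self.covs = np.stack([np.eye(dim)] * n_components)
        self.weights = np.full(n_components, 1.0 / n_components)
        # accumulated sufficient statistics: counts, sums, sums of outer products
        self.N = np.full(n_components, 1e-3)
        self.S1 = self.N[:, None] * self.means
        self.S2 = self.N[:, None, None] * (
            self.covs + np.einsum('ki,kj->kij', self.means, self.means))

    def _resp(self, X):
        # E-step: responsibilities under the current parameters
        logp = np.empty((X.shape[0], self.K))
        for k in range(self.K):
            diff = X - self.means[k]
            cov = self.covs[k] + self.reg * np.eye(self.d)
            inv = np.linalg.inv(cov)
            _, logdet = np.linalg.slogdet(cov)
            logp[:, k] = (np.log(self.weights[k])
                          - 0.5 * (logdet + self.d * np.log(2 * np.pi)
                                   + np.einsum('ni,ij,nj->n', diff, inv, diff)))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        return r / r.sum(axis=1, keepdims=True)

    def _mstep(self):
        # recompute parameters from the accumulated statistics
        self.N = np.maximum(self.N, 1e-3)
        self.weights = self.N / self.N.sum()
        self.means = self.S1 / self.N[:, None]
        self.covs = (self.S2 / self.N[:, None, None]
                     - np.einsum('ki,kj->kij', self.means, self.means)
                     + self.reg * np.eye(self.d))

    def learn(self, X):
        # learning module: add statistics of a new batch of solutions
        r = self._resp(X)
        self.N += r.sum(axis=0)
        self.S1 += r.T @ X
        self.S2 += np.einsum('nk,ni,nj->kij', r, X, X)
        self._mstep()

    def forget(self, X):
        # forgetting module: subtract statistics of discarded solutions
        r = self._resp(X)
        self.N -= r.sum(axis=0)
        self.S1 -= r.T @ X
        self.S2 -= np.einsum('nk,ni,nj->kij', r, X, X)
        self._mstep()

    def sample(self, n, seed=0):
        # reproduce candidate solutions by sampling the learned model
        rng = np.random.default_rng(seed)
        ks = rng.choice(self.K, size=n, p=self.weights)
        return np.stack([rng.multivariate_normal(self.means[k], self.covs[k])
                         for k in ks])
```

Because both modules operate on sufficient statistics, each update costs a single E-step over the incoming or outgoing batch, avoiding the repeated full retraining that the abstract identifies as the bottleneck of earlier learning-based EMO methods.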

