Abstract

Recent studies have demonstrated that the performance of Reference Vector (RV) based Evolutionary Multi- and Many-objective Optimization algorithms can be improved through the intervention of Machine Learning (ML) methods. These studies have shown how efficient search directions, learnt from the solutions of intermittent generations, can be utilized to create pro-convergence and pro-diversity offspring, leading to better convergence and diversity, respectively. The entailing steps of data-set preparation, training of ML models, and utilization of these models have been encapsulated as Innovized Progress operators, namely IP2 (for convergence improvement) and IP3 (for diversity improvement). Evidently, the focus in these studies has been on proof of concept, and no exploratory analysis has been done to investigate if, and how drastically, the operators' performance may be impacted when their underlying ML methods (Random Forest for IP2, and kNN for IP3) are varied. This paper seeks to bridge this gap through an exploratory analysis of both IP2 and IP3, based on eight different ML methods, tested against an exhaustive test suite comprising seven multi-objective and 32 many-objective test instances. While the results broadly endorse the robustness of the existing IP2 and IP3 operators, they also reveal interesting trade-offs across the different ML methods in terms of the Hypervolume (HV) metric and the corresponding run-time. Notably, within the ambit of the considered test suite and the ML methods adopted, kNN emerges as the winner for both IP2 and IP3, based on a joint consideration of the HV metric and run-time.
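The abstract does not give implementation details of the Innovized Progress operators; purely for illustration, the sketch below conveys the general flavor of such an ML-assisted operator: a regression model is trained on solution pairs drawn from earlier and later generations, and the learnt mapping is then applied to nudge newly created offspring. All data and names here are hypothetical placeholders, and the model classes (RandomForestRegressor, KNeighborsRegressor) stand in for the "underlying ML methods" whose variation the paper studies; the actual IP2/IP3 operators involve additional data-set preparation and repair steps described in the full text.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical archive of decision vectors from two intermittent generations.
# In practice these would be collected by the EMO algorithm during its run.
rng = np.random.default_rng(0)
n_solutions, n_vars = 100, 10
X_gen_t = rng.random((n_solutions, n_vars))                          # earlier-generation solutions
X_gen_t_plus_k = X_gen_t + 0.05 * rng.random((n_solutions, n_vars))  # corresponding "advanced" targets

# Train a regression model to learn the advancement mapping (IP2-style idea).
# Swapping this model class is exactly the kind of variation explored in the paper.
model = RandomForestRegressor(n_estimators=100, random_state=0)
# model = KNeighborsRegressor(n_neighbors=5)   # alternative underlying learner
model.fit(X_gen_t, X_gen_t_plus_k)

# Apply the learnt mapping to freshly created offspring, pushing them
# in the learnt search direction before they are evaluated.
offspring = rng.random((20, n_vars))
progressed_offspring = model.predict(offspring)
progressed_offspring = np.clip(progressed_offspring, 0.0, 1.0)       # respect box bounds
print(progressed_offspring.shape)
```

Under this framing, comparing ML methods reduces to substituting the regressor while keeping the rest of the operator pipeline fixed, which is how the HV-versus-run-time trade-offs reported in the paper can be interpreted.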
