Abstract

Achieving both good convergence and a uniform distribution of solutions remains a major challenge for metaheuristic algorithms on multi-objective optimization problems. In this article, a novel multi-objective particle swarm optimization (PSO) algorithm is proposed based on Gaussian mutation and an improved learning strategy. The approach adopts a Gaussian mutation strategy to improve the uniformity of the external archive and the current population. To improve the selection of the global best solution, different learning strategies are proposed for non-dominated and dominated solutions. An indicator is also presented to measure the distribution width of the non-dominated solution sets produced by different algorithms. Experiments were performed on eight benchmark test functions. The results show that the multi-objective improved PSO algorithm (MOIPSO) yields better convergence and distribution than the two comparison algorithms, and that the distribution width indicator is reasonable and effective.

Highlights

  • Multi-objective optimization problems (MOPs) are very common in engineering and other areas of research, such as economics, finance, production scheduling, and aerospace engineering

  • We introduce a new multi-objective particle swarm optimization (PSO) algorithm based on Gaussian mutation and an improved learning strategy to solve MOPs

  • Unlike many other MOPSOs, which often randomly select a solution from the external archive as the global best (gbest), we present different learning strategies to update the positions of non-dominated and dominated solutions (see the sketch after this list)

  • To further measure the spread of the obtained non-dominated solutions, a distribution width (DW) indicator is proposed
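As context for the second highlight above, the following is a minimal sketch of the standard Pareto-dominance test and of how a swarm can be split into non-dominated and dominated particles before separate learning rules are applied. The helper names and example values are illustrative assumptions; the actual MOIPSO update formulas for each group are given in the paper, not reproduced here.

    import numpy as np

    def dominates(f_a, f_b):
        """True if objective vector f_a Pareto-dominates f_b (minimization assumed)."""
        f_a, f_b = np.asarray(f_a, dtype=float), np.asarray(f_b, dtype=float)
        return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

    def split_by_dominance(objective_values):
        """Split particle indices into non-dominated and dominated groups.

        A multi-objective PSO in the spirit of MOIPSO would then apply a
        different learning (velocity/position) rule to each group; those
        exact rules are not reproduced in this sketch.
        """
        non_dominated, dominated = [], []
        for i in range(len(objective_values)):
            if any(dominates(objective_values[j], objective_values[i])
                   for j in range(len(objective_values)) if j != i):
                dominated.append(i)
            else:
                non_dominated.append(i)
        return non_dominated, dominated

    # Example with three particles on a two-objective (minimization) problem
    fvals = [np.array([1.0, 4.0]), np.array([2.0, 2.0]), np.array([3.0, 3.0])]
    print(split_by_dominance(fvals))  # -> ([0, 1], [2]): particle 2 is dominated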

Summary

Introduction

Multi-objective optimization problems (MOPs) are very common in engineering and other areas of research, such as economics, finance, production scheduling, and aerospace engineering. Reddy and Kumar [9] proposed an elitist-mutation multi-objective PSO (EM-MOPSO) algorithm with a strategic mechanism that effectively explores the feasible search space and speeds up the search for the true Pareto-optimal region. Cheng et al. [17] presented a hybrid multi-objective particle swarm optimization that combines the canonical PSO search with a teaching-learning-based optimization (TLBO) algorithm to promote diversity and improve the search ability. We introduce a new multi-objective PSO algorithm based on Gaussian mutation and an improved learning strategy to solve MOPs. The main new contributions of this work can be summarized as follows: (1) a Gaussian mutation "throw points" strategy is used to improve the uniformity of the external archive and the current population; (2) because it is difficult to select gbest for the velocity-update formula in MOPs, different learning strategies are proposed to update the positions of non-dominated and dominated solutions; (3) a distribution width (DW) indicator is proposed to further measure the spread of the obtained non-dominated solution set.
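As a rough illustration of contribution (1), here is a minimal sketch of a Gaussian mutation ("throw points") step that perturbs a solution with zero-mean Gaussian noise and clips it back into the feasible box. The function name, mutation scale, and per-dimension mutation rate are assumptions made for illustration, not the authors' exact formulation.

    import numpy as np

    def gaussian_mutation(position, lower, upper, sigma=0.1, rate=0.2, rng=None):
        """Perturb selected coordinates of a solution with Gaussian noise.

        Each coordinate is mutated with probability `rate` by adding zero-mean
        Gaussian noise scaled to the variable range, and the result is clipped
        back into the feasible box [lower, upper]. The scale and rate used here
        are illustrative defaults, not values taken from the paper.
        """
        rng = np.random.default_rng() if rng is None else rng
        position = np.asarray(position, dtype=float)
        span = np.asarray(upper, dtype=float) - np.asarray(lower, dtype=float)
        mask = rng.random(position.shape) < rate
        mutant = position + mask * rng.normal(0.0, sigma * span, size=position.shape)
        return np.clip(mutant, lower, upper)

    # "Throw" several mutated points around one archive member
    archive_member = np.array([0.3, 0.7, 0.5])
    candidates = [gaussian_mutation(archive_member, 0.0, 1.0) for _ in range(5)]

In a complete algorithm, such mutated points would presumably be evaluated and filtered through the external archive's dominance and crowding checks before being kept; that acceptance step is omitted from this sketch.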

Description of Multi-Objective Optimization Problems
Main Aspects of the Standard PSO Algorithm
Elitist Archive and Crowding Entropy
Gaussian Mutation Strategy
Improved Learning Strategy
Update External Archive
Population Elitist Incremental Strategy
Overview of the MOIPSO Algorithm
Test Problems
Convergence Measure Indicator
Distribution Measure Indicator
Algorithm Comparison
Conclusions