A variant of the particle swarm optimization for the improvement of fault diagnosis in industrial systems via faults estimation


Similar Papers
  • Research Article
  • Cited by 1
  • 10.3390/math13111704
Optimizing Route Planning via the Weighted Sum Method and Multi-Criteria Decision-Making
  • May 22, 2025
  • Mathematics
  • Guanquan Zhu + 7 more

Choosing the optimal path in trip planning is a complex task owing to the numerous options and constraints; this is known as the tourist trip design problem (TTDP). This study aims to achieve path optimization through the weighted sum method and multi-criteria decision analysis. First, the paper proposes a weighted-sum optimization method with a comprehensive evaluation model to address the TTDP, a complex multi-objective optimization problem. The goal of the research is to balance experience, cost, and efficiency by using the Analytic Hierarchy Process (AHP) and the Entropy Weight Method (EWM) to assign subjective and objective weights to indicators such as ratings, duration, and costs. These weights are optimized using the Lagrange multiplier method and integrated into the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) model. Additionally, a weighted-sum optimization method within the Traveling Salesman Problem (TSP) framework is used to maximize ratings while minimizing costs and distances. Second, the study compares seven heuristic algorithms, including the genetic algorithm (GA), particle swarm optimization (PSO), tabu search (TS), genetic-particle swarm optimization (GA-PSO), the gray wolf optimizer (GWO), and ant colony optimization (ACO), for solving the TOPSIS model, with GA-PSO performing best. The study then applies the Lagrange multiplier method to the algorithms, improving the solution quality of all seven, with an average solution quality improvement of 112.5% (from 0.16 to 0.34); the PSO algorithm achieves the best solution quality. Building on this, the study introduces a new variant of PSO, PSO with Laplace disturbance (PSO-LD), which incorporates a dynamic adaptive Laplace perturbation term to enhance global search capability, improving stability and convergence speed.
The experimental results show that PSO-LD outperforms the baseline PSO and other algorithms, achieving higher solution quality and faster convergence speed. The Wilcoxon signed-rank test confirms significant statistical differences among the algorithms. This study provides an effective method for experience-oriented path optimization and offers insights into algorithm selection for complex TTDP problems.
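The Laplace-disturbance idea described above can be sketched in a few lines: a standard PSO velocity update followed by an additive Laplace-distributed perturbation whose scale decays over the run. The function name, parameter values, and the linear decay schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pso_ld_step(x, v, pbest, gbest, it, max_it,
                w=0.7, c1=1.5, c2=1.5, b0=0.1,
                rng=np.random.default_rng(0)):
    """One PSO step with a decaying Laplace perturbation (illustrative sketch)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # Canonical velocity update: inertia + cognitive + social terms
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Laplace disturbance: scale shrinks as the run progresses (assumed schedule)
    scale = b0 * (1.0 - it / max_it)
    x = x + v + rng.laplace(loc=0.0, scale=scale + 1e-12, size=x.shape)
    return x, v
```

The heavier tails of the Laplace distribution (relative to a Gaussian) occasionally produce large jumps, which is one plausible way such a term aids global search.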

  • Research Article
  • Cited by 3
  • 10.4067/s0718-33052011000200009
A proposal to fault diagnosis in industrial systems using bio-inspired strategies
  • Aug 1, 2011
  • Ingeniare. Revista chilena de ingeniería
  • Lídice Camps Echevarría + 2 more

"En el presente trabajo se presenta un estudio sobre la aplicación de estrategias bioinspiradas para la optimización al diagnóstico de fallos en sistemas industriales. El objetivo principal es establecer una base para el desarrollo de nuevos y viables métodos de diagnóstico de fallos basados en modelos que permitan mejorar las dificultades de los métodos actuales. Estas dificultades están relacionadas, fundamentalmente, con la sensibilidad ante la presencia de fallos y la robustez ante perturbaciones externas. En el estudio se consideraron los algoritmos Evolución Diferencial y Optimización por Colonia de Hormigas. La efectividad de la propuesta es analizada mediante experimentos con el conocido problema de prueba de los dos tanques. Los experimentos consideraron presencia de ruido en la información y fallos incipientes de manera que fuera posible analizar las ventajas de la propuesta en cuanto a diagnóstico robusto y sensible. Los resultados obtenidos indican que el enfoque propuesto y, principalmente, la combinación de los dos algoritmos, caracterizan una metodología prometedora para el diagnóstico de fallos."

  • Research Article
  • Cited by 18
  • 10.1177/1748301816665021
A review of velocity-type PSO variants
  • Sep 18, 2016
  • Journal of Algorithms & Computational Technology
  • Ivo Sousa-Ferreira + 1 more

This paper reviews the particle swarm optimization variants belonging to the velocity-type class. The original particle swarm optimization algorithm was developed as an unconstrained optimization technique and lacks a mechanism for handling constrained optimization problems; this limitation is addressed by the dynamic-objective constraint-handling method, originally developed for two variants of the basic algorithm, namely restricted velocity particle swarm optimization and self-adaptive velocity particle swarm optimization. Also within the velocity-type class, three further variants are reviewed, specifically: (1) vertical particle swarm optimization; (2) velocity-limited particle swarm optimization; and (3) particle swarm optimization with escape velocity. These velocity-type variants all share a velocity parameter that determines the direction and movement of the particles.
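The velocity update that all of these variants build on, together with the clamping used in velocity-limited PSO, can be sketched as follows (coefficient values and the clamping bound are illustrative assumptions, not taken from the reviewed papers):

```python
import numpy as np

def velocity_update(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49,
                    v_max=0.5, rng=np.random.default_rng(1)):
    """Canonical PSO velocity update with clamping to [-v_max, v_max]."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    # Velocity limiting: keeps step sizes bounded so particles cannot
    # overshoot arbitrarily far in a single iteration
    return np.clip(v, -v_max, v_max)
```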

  • Conference Article
  • Cited by 7
  • 10.1109/cec.2015.7257009
Multimodal optimization using particle swarm optimization algorithms: CEC 2015 competition on single objective multi-niche optimization
  • May 1, 2015
  • Shi Cheng + 4 more

The aim of multimodal optimization is to locate multiple peaks/optima in a single run and to maintain the optima found until the end of the run. This paper reports the results of seven variants of particle swarm optimization (PSO) algorithms on the IEEE Congress on Evolutionary Computation (CEC) 2015 single-objective multi-niche optimization problems. The PSO variants are: PSO with star structure, PSO with ring structure, PSO with four-clusters structure, PSO with Von Neumann structure, social-only PSO with star structure, social-only PSO with ring structure, and cognition-only PSO. The experimental tests are conducted on fifteen benchmark functions. Based on the experimental results, PSO with ring structure performs better than the other PSO variants on multimodal optimization. To perform well on multimodal optimization problems, an algorithm needs to converge the candidate solutions to the global optima while keeping the population diverse throughout the search process.
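The topologies above differ only in which neighbors contribute a particle's guide: the star structure uses the swarm-wide best, while the ring structure restricts each particle to its two adjacent neighbors, which slows information spread and helps preserve niches. A minimal sketch of the ring-topology neighborhood best (minimization assumed; the function name is illustrative):

```python
import numpy as np

def ring_lbest(pbest_fitness, pbest_positions):
    """For each particle, return the best personal-best position among
    itself and its two ring neighbours (indices wrap around)."""
    n = len(pbest_fitness)
    lbest = np.empty_like(pbest_positions)
    for i in range(n):
        neigh = [(i - 1) % n, i, (i + 1) % n]
        best = min(neigh, key=lambda j: pbest_fitness[j])
        lbest[i] = pbest_positions[best]
    return lbest
```

Each particle then uses its `lbest` row in place of the global best in the velocity update.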

  • Research Article
  • Cited by 6
  • 10.5897/sre12.056
Hybrid intelligent algorithm [improved particle swarm optimization (PSO) with ant colony optimization (ACO)] for multiprocessor job scheduling
  • May 30, 2012
  • Scientific Research and Essays
  • K Thanushkodi

Efficient multiprocessor scheduling is essentially the problem of allocating a set of computational jobs to a set of processors so as to minimize the overall execution time. The main issue is how jobs are partitioned such that both the total finishing time and the waiting time are minimized; minimizing these two criteria simultaneously is a multi-objective optimization problem. There are many variations of this problem, most of which are NP-hard, so heuristics must be relied on to solve problem instances. Many heuristic-based approaches have been applied to finding schedules that minimize the execution time of computing tasks on parallel processors. Particle swarm optimization (PSO) is currently employed in several optimization and search problems due to its ease of use and its ability to find solutions successfully. In this paper, a variant of PSO called improved particle swarm optimization (ImPSO) is developed and hybridized with ant colony optimization (ACO) to achieve better solutions. The proposed hybrid algorithm effectively exploits the distributed and parallel computing capabilities of swarm intelligence approaches. In addition, a hybrid algorithm combining ImPSO with an artificial immune system (AIS) is implemented on the same set of problems for comparison with the proposed hybrid (ImPSO with ACO). The proposed hybrid approach (improved PSO with ACO) gives better results in experiments and reduces finishing and waiting time simultaneously. Key words: particle swarm optimization (PSO), improved particle swarm optimization (ImPSO), ant colony optimization (ACO), job scheduling, finishing time, waiting time.

  • Conference Article
  • Cited by 2
  • 10.1109/ssci.2015.188
On the Performance of Particle Swarm Optimization Algorithms in Solving Cheap Problems
  • Dec 1, 2015
  • Abdullah Al-Dujaili + 2 more

Eight variants of the Particle Swarm Optimization (PSO) algorithm are discussed and experimentally compared with one another. The chosen PSO variants reflect recent research directions on PSO, namely parameter tuning, neighborhood topology, and learning strategies. The Comparing Continuous Optimizers (COCO) methodology was adopted to compare these variants on the noiseless BBOB test bed. Based on the results, we provide useful insights into the variants' relative efficiency and effectiveness under a cheap budget of function evaluations, and offer suggestions about which variant to use depending on what is known about the optimization problem in terms of evaluation budget, dimensionality, and function structure. Furthermore, we propose possible future research directions addressing the limitations of the latest PSO variants. We hope this paper will serve as a milestone in assessing state-of-the-art PSO algorithms and become a reference for the swarm intelligence community on this matter.

  • Conference Article
  • 10.1109/cec48606.2020.9185828
Evolving Order and Chaos: Comparing Particle Swarm Optimization and Genetic Algorithms for Global Coordination of Cellular Automata
  • Jul 1, 2020
  • Anthony D Rhodes

We apply two evolutionary search algorithms: Particle Swarm Optimization (PSO) and Genetic Algorithms (GAs) to the design of Cellular Automata (CA) that can perform computational tasks requiring global coordination. In particular, we compare search efficiency for PSO and GAs applied to both the density classification problem and to the novel generation of 'chaotic' CA. Our work furthermore introduces a new variant of PSO, the Binary Global-Local PSO (BGL-PSO).

  • Conference Article
  • Cited by 6
  • 10.1109/itng.2014.109
Performance Comparison of Partical Swarm Optimization Variant Models
  • Apr 1, 2014
  • Bing Qi + 1 more

In this work, an extensive comparative study is conducted to demonstrate the performance of Particle Swarm Optimization (PSO) variants on five well-known benchmark functions. According to the contributions of PSO's cognitive and social factors, the PSO algorithm is categorized into five variants. Unlike other research, which has included only four PSO models, an extra PSO variant called the Selfless Full-Model is proposed. The five PSO variants, named the PSO Full Model, PSO Cognitive-Only Model, PSO Social-Only Model, PSO Selfless Model, and PSO Selfless Full-Model, respectively, are applied to solve the benchmark functions. Their performances are compared in terms of success rate, average function evaluations, and best fitness.
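The Full, Cognitive-Only, and Social-Only models differ only in which attraction terms of the velocity update are retained. A hedged sketch of that distinction (coefficient values are assumed; the selfless variants' modified best-selection is only noted in the docstring):

```python
import numpy as np

def pso_velocity(x, v, pbest, gbest, model="full",
                 w=0.72, c1=1.49, c2=1.49, rng=np.random.default_rng(2)):
    """Velocity update for the full, cognitive-only, and social-only models.

    The selfless models additionally exclude a particle's own pbest when
    choosing the swarm best (not shown in this sketch).
    """
    if model == "cognitive":
        c2 = 0.0  # drop the social attraction term
    elif model == "social":
        c1 = 0.0  # drop the cognitive attraction term
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
```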

  • Book Chapter
  • Cited by 3
  • 10.1007/978-981-19-8703-8_18
Optimized Machine Learning Model with Modified Particle Swarm Optimization for Data Classification
  • Jan 1, 2023
  • Kah Sheng Lim + 7 more

Metaheuristic search algorithms (MSAs) have received increasing popularity in recent years due to their excellent capability of solving complex real-world optimization problems without depending on gradient information. Particle swarm optimization (PSO), as one of the MSAs, is widely used in optimization tasks due to its simple framework and quick convergence toward the global optimum. However, conventional PSO suffers from premature convergence and rapid loss of population diversity when the population is poorly initialized due to its random characteristics. In this paper, a new variant of PSO, namely PSO with a multi-chaotic scheme (PSOMCS), is introduced to train an artificial neural network (ANN) by optimizing its neuron weights, biases, and choice of activation function on datasets obtained from the UCI machine learning repository. An initial population generated using a multi-chaotic system and oppositional-based learning ensures broader search space coverage, enabling PSOMCS to solve complex optimization problems effectively. The classification performance of an ANN trained with PSOMCS is compared with that of other existing PSO variants. Based on the simulation results, the ANN optimized by PSOMCS outperformed its competitors in terms of classification performance on both training and testing datasets.

  • Research Article
  • Cited by 10
  • 10.1007/s11047-013-9408-3
Quadratic interpolation based orthogonal learning particle swarm optimization algorithm
  • Dec 22, 2013
  • Natural Computing
  • Ruochen Liu + 4 more

Particle swarm optimization (PSO) is a population-based algorithm for solving global optimization problems. Owing to its efficiency and simplicity, PSO has attracted many researchers' attention, and many variants have been developed. Orthogonal learning particle swarm optimization (OLPSO) is a PSO variant that relies on a new learning strategy called orthogonal learning. OLPSO differs from standard PSO in how it uses experience information: in standard PSO, each particle combines its historical best and the globally best experience through linear summation, whereas in OLPSO, particles can fly in better directions by constructing an efficient exemplar through orthogonal experimental design. However, the global version of orthogonal learning PSO (OLPSO-G) still has some drawbacks on complex multimodal function optimization. This paper proposes a quadratic interpolation based OLPSO-G (QIOLPSO-G), in which a quadratic interpolation based construction strategy for the personal historical best experience is applied. Meanwhile, opposition-based learning and Gaussian mutation are also introduced to increase population diversity and discourage premature convergence. Experiments are conducted on 16 benchmark problems to validate the effectiveness of QIOLPSO-G, and comparisons are made with four typical PSO algorithms. The results show that the three introduced strategies do enhance the effectiveness of the algorithm.

  • Research Article
  • Cited by 41
  • 10.1016/j.eswa.2015.12.008
A low-complexity hybrid algorithm based on particle swarm and ant colony optimization for large-MIMO detection
  • Dec 29, 2015
  • Expert Systems with Applications
  • Manish Mandloi + 1 more


  • Research Article
  • Cited by 13
  • 10.1155/2023/3160184
Energy Dispatching Based on an Improved PSO‐ACO Algorithm
  • Jan 1, 2023
  • International Journal of Intelligent Systems
  • Qisong Song + 5 more

In order to improve the comprehensive performance of energy dispatching between different sites, optimization research on the particle swarm optimization (PSO) and ant colony optimization (ACO) algorithms is carried out. We propose a new improved PSO-ACO algorithm, based on the idea of hybridization, to address the poor efficiency of energy dispatching between sites. First, multi-objective performance indicators are introduced to transform the sites' energy dispatching problem into a multi-objective optimization problem. Second, a vitality factor is introduced into the PSO strategy to escape local optima, and in the PSO-ACO fusion strategy, the PSO routes are transformed into ant colony enhancement pheromone to accelerate the accumulation of the ACO initial pheromone. Then, an angle guidance function is introduced into the state transition probability of the ACO strategy to improve global search capability, and a high-quality pheromone update rule is proposed to improve the convergence speed of the algorithm. Finally, simulation experiments are carried out on the improved PSO-ACO algorithm, the Min-Max Ant System (MMAS) algorithm, the ACO algorithm, the PSO algorithm, and the PSO update algorithm in a variety of complex site scenarios. The simulation results show that the improved PSO-ACO algorithm can plan a site energy dispatching route that is shorter, less time-consuming, and more secure, realizing comprehensive, global optimization of energy dispatching.

  • Research Article
  • Cited by 15
  • 10.1007/s00500-015-1784-4
Self-adapting hybrid strategy particle swarm optimization algorithm
  • Jul 23, 2015
  • Soft Computing
  • Chuan Wang + 3 more

Particle swarm optimization (PSO) has shown promising performance on various benchmark functions and engineering optimization problems. However, it is still difficult to achieve a satisfying trade-off between exploration and exploitation across all optimization problems and evolving stages. Furthermore, the control parameters of some related mechanisms require prior experience obtained through trial and error. This paper presents a novel PSO algorithm that adaptively adopts various search strategies, called Self-adapting Hybrid Strategy PSO (SaHSPS). Unlike other peer PSO variants, this method dynamically changes the probabilities of different strategies according to their previously successful search memories, without any additional control parameters. The probabilities of the different strategies are re-initialized according to a proposed dynamic probabilistic model to diversify the population. Besides, particles are updated by probabilistically selected strategies after niching PSO with ring topology, and a dynamic updating mechanism based on niching PSO is proposed to guarantee parallel searching capability during the whole evolution process. Thus, the proposed algorithm is intended to be problem-independent and search-stage-independent, yielding more satisfying solutions on various optimization problems. A comprehensive experimental study is conducted on the 28 benchmark functions of the CEC 2013 special session on real-parameter optimization, including shifted, rotated, multi-modal, high-conditioned, expanded, and composition problems, in comparison with several state-of-the-art variants of PSO and differential evolution (DE) algorithms. Comparison results show that SaHSPS obtains outstanding performance on the majority of the test problems. Moreover, a practical engineering problem, real power loss minimization of the IEEE 30-bus power system, is used to further evaluate SaHSPS. The numerical results, compared with other stochastic search algorithms, show that SaHSPS can find high-quality solutions with high probability.

  • Research Article
  • Cited by 9
  • 10.1038/s41598-024-68744-6
A gene selection algorithm for microarray cancer classification using an improved particle swarm optimization
  • Aug 23, 2024
  • Scientific Reports
  • Arfan Ali Nagra + 6 more

Gene selection is an essential step in the classification of microarray cancer data. Gene expression cancer data (DNA microarrays) facilitate computing the robust and concurrent expression of various genes. Particle swarm optimization (PSO) requires simple operators and few parameters for tuning the model in gene selection. Selecting a prognostic gene with small redundancy is a great challenge for researchers, as there are complications in PSO-based selection methods. In this research, a new variant of PSO, self-inertia weight adaptive PSO (SIW-APSO), is proposed. In the proposed algorithm, SIW-APSO-ELM is explored to achieve high gene selection prediction accuracy. This novel algorithm establishes a balance between the exploitation and exploration capabilities of the improved inertia weight adaptive particle swarm optimization. The SIW-APSO algorithm is employed for solution exploration: each particle in SIW-APSO updates its position and velocity iteratively through an evolutionary process. An extreme learning machine (ELM) is designed for the selection procedure. The proposed method has been employed to identify several genes in the cancer dataset. The classification stage uses ELM, K-centroid nearest neighbor, and support vector machine classifiers to attain high forecast accuracy compared with state-of-the-art methods on microarray cancer datasets, which shows the effectiveness of the proposed method.
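Self-adaptive inertia weighting generally ties each particle's inertia weight to its current fitness, so that better particles exploit while worse ones keep exploring. The following is a generic illustration of that idea under assumed bounds, not the exact SIW-APSO update rule:

```python
import numpy as np

def adaptive_inertia(fitness, w_min=0.4, w_max=0.9):
    """Illustrative per-particle inertia weights for a minimization problem:
    better (lower) fitness gets a smaller weight (exploitation), worse
    fitness a larger one (exploration). Generic scheme, not SIW-APSO's."""
    f = np.asarray(fitness, dtype=float)
    span = f.max() - f.min()
    if span == 0:
        # All particles equally fit: fall back to the midpoint weight
        return np.full_like(f, (w_min + w_max) / 2)
    return w_min + (w_max - w_min) * (f - f.min()) / span
```

Each particle's weight would then replace the fixed `w` in its velocity update.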

  • Conference Article
  • Cited by 12
  • 10.1109/cec.2011.5949695
Hierarchical dynamic neighborhood based Particle Swarm Optimization for global optimization
  • Jun 1, 2011
  • Pradipta Ghosh + 3 more

Particle Swarm Optimization (PSO) is arguably one of the most popular nature-inspired algorithms for real-parameter optimization at present. In this article, we introduce a new variant of PSO referred to as Hierarchical D-LPSO (Dynamic Local Neighborhood based Particle Swarm Optimization). In this variant, the particles are arranged in a dynamic hierarchy. Within each level of the hierarchy, the particles search for better solutions using dynamically varying sub-swarms, i.e., the sub-swarms are regrouped frequently and information is exchanged among them. Whether a particle moves up or down the hierarchy depends on the quality of its best result found so far, so the swarm is largely influenced by the good particles that move up in the hierarchy. The performance of Hierarchical D-LPSO is tested on the set of 25 numerical benchmark functions from the competition and special session on real-parameter optimization held under the IEEE Congress on Evolutionary Computation (CEC) 2005. The results are compared with those obtained by some of the best-known variants of PSO as well as a few significant existing evolutionary algorithms.
