EESB-FDO: enhancing the fitness-dependent optimizer through a modified boundary handling mechanism

Abstract

The fitness-dependent optimizer (FDO) has recently gained attention as an effective metaheuristic for solving different optimization problems. However, it faces limitations in exploitation and convergence speed. To overcome these challenges, this study introduces two enhanced variants: enhancing exploitation through stochastic boundary for FDO (EESB-FDO) and enhancing exploitation through boundary carving for FDO (EEBC-FDO). In addition, the ELFS strategy is proposed to constrain Lévy flight steps, ensuring more stable exploration. Experimental results show that these modifications significantly improve the performance of FDO compared to the original version. To evaluate EESB-FDO and EEBC-FDO, three categories of benchmark test functions were used: classical, CEC 2019, and CEC 2022, and the assessment was further supported by statistical analysis methods to ensure a comprehensive and rigorous evaluation. The proposed algorithms were compared with several existing FDO modifications as well as other well-established metaheuristic algorithms, including the Arithmetic Optimization Algorithm (AOA), the Learner Performance-Based Behavior Algorithm (LPB), the Whale Optimization Algorithm (WOA), and the Fox-inspired Optimization Algorithm (FOX). The statistical analysis indicated that both EESB-FDO and EEBC-FDO outperform the aforementioned algorithms. Finally, EESB-FDO and EEBC-FDO were applied to four real-world optimization problems: the gear train design problem, the three-bar truss problem, the pathological IgG fraction in the nervous system, and the integrated cyber-physical attack on a manufacturing system.
The results demonstrate that both proposed variants significantly outperform both the FDO and the modified fitness-dependent optimizer (MFDO) in solving these complex problems.
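The abstract names its three mechanisms (stochastic boundary repair, boundary carving, and Lévy step limiting) without giving formulas. A minimal, illustrative sketch of what such mechanisms typically look like follows; every function name, repair rule, and the step-clamping constant are our assumptions for illustration, not the authors' actual update rules:

```python
import math
import random

def stochastic_boundary(x, lo, hi):
    """Resample any out-of-bounds component at a random in-bounds
    point (illustrative stand-in for a stochastic-boundary rule)."""
    return [xi if lo <= xi <= hi else random.uniform(lo, hi) for xi in x]

def boundary_carving(x, lo, hi):
    """Clip violating components onto the nearest bound
    (illustrative stand-in for a boundary-carving rule)."""
    return [min(max(xi, lo), hi) for xi in x]

def clamped_levy_step(beta=1.5, max_step=1.0):
    """Mantegna-style Levy step, truncated so one step cannot exceed
    max_step (illustrative stand-in for constraining Levy flights)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    step = random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)
    return max(-max_step, min(max_step, step))
```

The two repair rules differ in where a violating agent lands: carving keeps it on the boundary (aiding exploitation near edges), while stochastic repair re-scatters it (aiding diversity).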

Similar Papers
  • Book Chapter
  • Cited by: 1
  • DOI: 10.1201/9781003315476-3
Fitness-Dependent Optimizer for IoT Healthcare Using Adapted Parameters
  • Jan 13, 2023
  • Aso M Aladdin + 9 more

In the fitness-dependent optimizer (FDO), the search agent's position is updated using speed or velocity, but in a distinctive way: FDO creates weights based on the fitness function value of the problem, which help guide the agents through the exploration and exploitation processes. In the original work, FDO was evaluated against other algorithms such as the genetic algorithm (GA) and particle swarm optimization (PSO). The salp swarm algorithm (SSA), dragonfly algorithm (DA), and whale optimization algorithm (WOA) have also been evaluated against FDO in terms of their results. From these experimental findings, we may conclude that FDO outperforms the other techniques stated. There are two primary goals for this chapter: (1) the implementation of FDO is shown step by step so that readers can better comprehend the algorithm and apply FDO to solve real-world applications quickly; (2) it deals with how to tune the FDO settings to make the metaheuristic evolutionary algorithm better at evaluating large quantities of information in the IoT health service system. Ultimately, the target of this chapter's enhancement is to adapt the IoT healthcare framework based on FDO to spawn effective IoT healthcare applications for solving real-world optimization, aggregation, prediction, segmentation, and other technological problems.

  • Research Article
  • Cited by: 33
  • DOI: 10.1109/access.2020.2968064
Improved Fitness-Dependent Optimizer Algorithm
  • Jan 1, 2020
  • IEEE Access
  • Danial Abdulkareem Muhammed + 2 more

The fitness-dependent optimizer (FDO) algorithm was recently introduced in 2019. An improved FDO (IFDO) algorithm is presented in this work, and this algorithm contributes considerably to refining the ability of the original FDO to address complicated optimization problems. To improve the FDO, the IFDO calculates alignment and cohesion and then uses these behaviors with the pace at which the FDO updates its position. Moreover, in determining the weights, the FDO uses the weight factor (wf), which is zero in most cases and one in only a few cases. Conversely, the IFDO randomizes wf in the [0, 1] range and then narrows the range when a better fitness weight value is achieved. In this work, the IFDO algorithm and its method of converging on the optimal solution are demonstrated. Additionally, 19 classical standard test function groups are utilized to test the IFDO, and the FDO and three other well-known algorithms, namely, particle swarm optimization (PSO), the dragonfly algorithm (DA), and the genetic algorithm (GA), are selected to evaluate the IFDO results. Furthermore, the CEC-C06 2019 Competition, the set of IEEE Congress of Evolutionary Computation benchmark test functions, is utilized to test the IFDO, and the FDO and three recent algorithms, namely, the salp swarm algorithm (SSA), DA, and the whale optimization algorithm (WOA), are chosen to gauge the IFDO results. The results show that the IFDO is practical in some cases, and its results are improved in most cases. Finally, to prove the practicability of the IFDO, it is used in real-world applications.

  • Research Article
  • Cited by: 262
  • DOI: 10.1109/access.2019.2907012
Fitness Dependent Optimizer: Inspired by the Bee Swarming Reproductive Process
  • Jan 1, 2019
  • IEEE Access
  • Jaza Mahmood Abdullah + 1 more

In this paper, a novel swarm intelligent algorithm is proposed, known as the fitness dependent optimizer (FDO). The bee swarming reproductive process and their collective decision-making have inspired this algorithm; it has no algorithmic connection with the honey bee algorithm or the artificial bee colony algorithm. It is worth mentioning that FDO is considered a particle swarm optimization (PSO)-based algorithm that updates the search agent position by adding velocity (pace). However, FDO calculates velocity differently; it uses the problem fitness function value to produce weights, and these weights guide the search agents during both the exploration and exploitation phases. Throughout the paper, the FDO algorithm is presented, and the motivation behind the idea is explained. Moreover, FDO is tested on a group of 19 classical benchmark test functions, and the results are compared with three well-known algorithms: PSO, the genetic algorithm (GA), and the dragonfly algorithm (DA). Additionally, FDO is tested on the IEEE Congress of Evolutionary Computation benchmark test functions (CEC-C06, 2019 Competition) [1]. The results are compared with three modern algorithms: DA, the whale optimization algorithm (WOA), and the salp swarm algorithm (SSA). The FDO results show better performance in most cases and comparative results in other cases. Furthermore, the results are statistically tested with the Wilcoxon rank-sum test to show their significance. Likewise, FDO's stability in both the exploration and exploitation phases is verified, and its performance is confirmed using different standard measurements. Finally, FDO is applied to real-world applications as evidence of its feasibility.
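The mechanism the abstract describes (a fitness-derived weight steering a velocity-like "pace") can be illustrated with a heavily simplified, minimization-only sketch. This is not the paper's exact update rule; the branch conditions and the sign choice below are assumptions made only to show the shape of the idea:

```python
import random

def fdo_step(x, f_x, best, f_best, wf=0.0):
    """One simplified FDO-style move for one agent (minimization).
    fw compares the global-best fitness with this agent's fitness;
    the resulting pace is added to the position. Illustrative only."""
    fw = abs(f_best / f_x) - wf if f_x != 0 else 0.0  # fitness weight
    new_x = []
    for xi, bi in zip(x, best):
        r = random.uniform(-1, 1)
        if fw <= 0 or fw >= 1:               # degenerate weight: random walk
            pace = xi * r
        else:                                # weighted pull toward the best
            pace = (xi - bi) * fw * (-1 if r < 0 else 1)
        new_x.append(xi + pace)
    return new_x
```

A small fitness weight (agent already near the best fitness) yields small, exploitative steps, while degenerate weights fall back to a random walk for exploration.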

  • Research Article
  • Cited by: 119
  • DOI: 10.1007/s10489-022-03533-0
FOX: a FOX-inspired optimization algorithm
  • Apr 24, 2022
  • Applied Intelligence
  • Hardi Mohammed + 1 more

This paper proposes a novel nature-inspired optimization algorithm called the Fox optimizer (FOX), which mimics the foraging behavior of foxes in nature when hunting prey. The algorithm is based on techniques for measuring the distance between the fox and its prey to execute an efficient jump. After presenting the mathematical models and the algorithm of FOX, five classical benchmark functions and the CEC2019 benchmark test functions are used to evaluate its performance. The FOX algorithm is also compared against the Dragonfly Algorithm (DA), Particle Swarm Optimization (PSO), Fitness Dependent Optimizer (FDO), Grey Wolf Optimization (GWO), Whale Optimization Algorithm (WOA), Chimp Optimization Algorithm (ChOA), Butterfly Optimization Algorithm (BOA), and Genetic Algorithm (GA). The results indicate that FOX outperforms the above-mentioned algorithms. Subsequently, the Wilcoxon rank-sum test is used to confirm that FOX is better than the comparative algorithms in a statistically significant manner. Additionally, parameter sensitivity analysis is conducted to show different exploratory and exploitative behaviors in FOX. The paper also employs FOX to solve engineering problems, such as pressure vessel design, as well as an economic load dispatch problem in electrical power generation. FOX achieved better results in optimizing these problems than GWO, PSO, WOA, and FDO.

  • Research Article
  • Cited by: 22
  • DOI: 10.1016/j.asoc.2023.110701
A fusion algorithm based on whale and grey wolf optimization algorithm for solving real-world optimization problems
  • Aug 3, 2023
  • Applied Soft Computing
  • Qian Yang + 3 more

  • Book Chapter
  • Cited by: 4
  • DOI: 10.1007/978-981-16-6887-6_31
Performance of Artificial Electric Field Algorithm on 100 Digit Challenge Benchmark Problems (CEC-2019)
  • Jan 1, 2022
  • Dikshit Chauhan + 1 more

The Artificial Electric Field Algorithm (AEFA) is a population-based stochastic optimization algorithm for solving continuous and discrete optimization problems, and it is based on Coulomb's law of electrostatic force and Newton's laws of motion. Over the years, AEFA has been used to solve many challenging optimization problems. In this article, AEFA is used to solve the 100 digit challenge benchmark problems, and the experimental results of AEFA are compared with recently proposed algorithms such as the dragonfly algorithm (DA), whale optimization algorithm (WOA), salp swarm algorithm (SSA), and fitness dependent optimizer (FDO). The performance of AEFA is found to be very competitive and satisfactory in comparison with the other optimization algorithms chosen in the article.

Keywords: metaheuristic algorithms; optimization; swarm intelligence; artificial electric field algorithm (AEFA); 100 digit challenge.

  • Research Article
  • Cited by: 110
  • DOI: 10.1007/s00521-020-04823-9
A novel hybrid GWO with WOA for global numerical optimization and solving pressure vessel design
  • Mar 10, 2020
  • Neural Computing and Applications
  • Hardi Mohammed + 1 more

The Whale Optimization Algorithm (WOA) is a recently proposed metaheuristic inspired by the hunting behavior of the humpback whale. However, WOA suffers from poor performance in the exploitation phase and stagnates in local best solutions. Grey Wolf Optimization (GWO) is a very competitive algorithm compared to other common metaheuristic algorithms, as it shows superior performance in the exploitation phase when tested on unimodal benchmark functions. Therefore, the aim of this paper is to hybridize GWO with WOA to overcome these problems, since GWO performs well in exploiting optimal solutions. In this paper, a hybridized WOA with GWO, called WOAGWO, is presented. The proposed hybridized model consists of two steps. Firstly, the hunting mechanism of GWO is embedded into the WOA exploitation phase with a new GWO-related condition. Secondly, a new technique is added to the exploration phase to improve the solution after each iteration. Experiments are conducted on three standard benchmark suites: 23 common functions, 25 CEC2005 functions, and 10 CEC2019 functions. The proposed WOAGWO is also evaluated against the original WOA, GWO, and three other commonly used algorithms. Results show that WOAGWO outperforms the other algorithms according to the Wilcoxon rank-sum test. Finally, WOAGWO is likewise applied to solve an engineering problem, pressure vessel design, and the results prove that WOAGWO achieves an optimum solution that is better than those of WOA and the Fitness Dependent Optimizer (FDO).

  • Research Article
  • Cited by: 41
  • DOI: 10.3390/math9070781
Mexican Axolotl Optimization: A Novel Bioinspired Heuristic
  • Apr 3, 2021
  • Mathematics
  • Yenny Villuendas-Rey + 4 more

When facing certain problems in science, engineering, or technology, it is not enough to find a solution; it is essential to seek and find the best possible solution through optimization. In many cases exact optimization procedures are not applicable due to the great computational complexity of the problems. As an alternative, there are approximate optimization algorithms, whose purpose is to reduce computational complexity by pruning some areas of the problem search space. To achieve this, researchers have been inspired by nature, because animals and plants tend to optimize many of their life processes. The purpose of this research is to design a novel bioinspired algorithm for numeric optimization: the Mexican Axolotl Optimization algorithm. The effectiveness of our proposal was compared against nine optimization algorithms (artificial bee colony, cuckoo search, dragonfly algorithm, differential evolution, firefly algorithm, fitness dependent optimizer, whale optimization algorithm, monarch butterfly optimization, and slime mould algorithm) when applied over four sets of benchmark functions (unimodal, multimodal, composite, and competition functions). The statistical analysis shows the ability of the Mexican Axolotl Optimization algorithm to obtain very good optimization results in all experiments, except for composite functions, where it exhibits average performance.

  • Research Article
  • Cited by: 5
  • DOI: 10.1109/access.2022.3197290
Modified Fitness Dependent Optimizer for Solving Numerical Optimization Functions
  • Jan 1, 2022
  • IEEE Access
  • Jumaa Fatih Salih + 2 more

The Fitness Dependent Optimizer (FDO) is a recent metaheuristic algorithm that was developed in 2019. It is one of the metaheuristic algorithms that have been used by researchers to solve various applications, especially engineering design problems. In this paper, a comprehensive survey is conducted on FDO and its applications. Despite its competitive performance, FDO has two major problems: low exploitation and slow convergence. Therefore, a modification of FDO (MFDO) is proposed to address these issues. MFDO uses two methods to enhance the performance of FDO: firstly, optimizing the range of the weight factor, used for finding the fitness weight, to between 0 and 0.2; secondly, using the sine cardinal mathematical function to update the fitness weight and pace, which refers to the speed of the bees. To evaluate the performance of MFDO, 19 classical benchmark functions and the CEC2019 benchmark functions are used. MFDO is compared against all the enhancements of FDO, and it is also compared with Grey Wolf Optimization (GWO), Chimp Optimization Algorithm (ChOA), Genetic Algorithm (GA), and Butterfly Optimization Algorithm (BOA). Statistical results proved that MFDO achieved significant performance compared to the other algorithms. Finally, MFDO is used to solve three applications: Welded Beam Design (WBD), Pressure Vessel Design (PVD), and the Spring Design Problem. Results proved that MFDO performed well in solving these applications against FDO, the Gravitational Search Algorithm (GSA), GA, and the Grasshopper Optimization Algorithm (GOA).
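The two MFDO ingredients the abstract names (a weight factor drawn from [0, 0.2] and a sine cardinal, or sinc, transform of the fitness weight) can be sketched as follows. How MFDO actually composes these pieces is not given in the abstract, so the composition below is an assumption for illustration:

```python
import math
import random

def sinc(z):
    """Unnormalized sine cardinal: sin(z)/z, with sinc(0) = 1."""
    return 1.0 if z == 0 else math.sin(z) / z

def mfdo_weight(f_best, f_i):
    """Illustrative MFDO-style fitness weight: wf is drawn from
    [0, 0.2] as the abstract describes, and the raw fitness weight
    is passed through sinc (composition assumed, minimization)."""
    wf = random.uniform(0.0, 0.2)
    raw = abs(f_best / f_i) - wf if f_i != 0 else 0.0
    return sinc(raw)
```

Since sinc maps small arguments smoothly toward 1 and damps larger ones, it tends to soften the weight's swings compared with using the raw fitness ratio directly.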

  • Research Article
  • Cited by: 10
  • DOI: 10.1109/access.2021.3111033
Hybrid Sine Cosine and Fitness Dependent Optimizer for Global Optimization
  • Jan 1, 2021
  • IEEE Access
  • Po Chan Chiu + 3 more

The fitness-dependent optimizer (FDO), a newly proposed swarm intelligent algorithm, is based on the reproductive mechanism of bee swarming and collective decision-making. To optimize performance, FDO calculates velocity (pace) differently: it calculates a weight using the fitness function values to update the search agent position during the exploration and exploitation phases. However, the FDO encounters slow convergence and unbalanced exploitation and exploration. Hence, this study proposes a novel hybrid of the sine cosine algorithm and the fitness-dependent optimizer (SC-FDO) for updating the velocity (pace) using the sine cosine scheme. The proposed algorithm, SC-FDO, has been tested over 19 classical and 10 IEEE Congress of Evolutionary Computation (CEC-C06 2019) benchmark test functions. The findings revealed that SC-FDO achieved better performance in most cases than the original FDO and well-known optimization algorithms. The proposed SC-FDO improved the original FDO by achieving a better exploration-exploitation tradeoff with a faster convergence speed. Additionally, SC-FDO was applied to missing data estimation cases, treating the imputation of missing values as an optimization problem. This is the first time, to our knowledge, that nature-inspired algorithms have been considered for handling time series datasets with low and high missingness (10%-90%). The impact of missing data on the predictive ability of the proposed SC-FDO was evaluated using a large weather dataset spanning 1985 to 2020. The results revealed that imputation sensitivity depends on the percentage of missingness and the imputation models. The findings demonstrated that the SC-FDO-based multilayer perceptron (MLP) trainer outperformed the other three optimizer trainers, with the highest average accuracy of 90% when treating low and high missingness in the dataset.
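The "sine cosine scheme" for the pace follows the standard sine cosine algorithm (SCA) update, which the sketch below reproduces for one dimension; how SC-FDO blends this with FDO's fitness weight is simplified away here, so treat the function as an illustration of the SCA ingredient only:

```python
import math
import random

def sc_pace(xi, best_i, t, t_max, a=2.0):
    """Illustrative sine cosine-style pace for one dimension (after
    the SCA update): r1 decays linearly over iterations to shift
    from exploration to exploitation, and the sin/cos branch is
    chosen at random. Details of the SC-FDO hybrid are omitted."""
    r1 = a - t * (a / t_max)              # decreasing step amplitude
    r2 = random.uniform(0, 2 * math.pi)   # random phase
    r3 = random.uniform(0, 2)             # random weight on the best
    d = abs(r3 * best_i - xi)             # distance to (scaled) best
    trig = math.sin(r2) if random.random() < 0.5 else math.cos(r2)
    return r1 * trig * d
```

Because r1 shrinks to zero by the final iteration, late-stage moves stay close to the incumbent best, which is the convergence behavior the abstract credits to the hybrid.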

  • Research Article
  • Cited by: 4
  • DOI: 10.1155/2022/7055910
Improved Fitness-Dependent Optimizer for Solving Economic Load Dispatch Problem.
  • Jul 11, 2022
  • Computational Intelligence and Neuroscience
  • Barzan Hussein Tahir + 6 more

Economic load dispatch plays a fundamental role in the operation of power systems, as it decreases the environmental load, minimizes the operating cost, and preserves energy resources. The optimal solution to economic load dispatch problems under various constraints can be obtained by employing evolutionary and swarm-based algorithms, but the major drawback of swarm-based algorithms is premature convergence towards an optimal solution. The fitness-dependent optimizer (FDO) is a novel optimization algorithm stimulated by the decision-making and reproductive process of bee swarming; it examines the search space based on the searching approach of particle swarm optimization. To calculate the pace, the fitness function is utilized to generate weights that direct the search agents in the phases of exploitation and exploration. In this research, the authors have used the fitness-dependent optimizer to solve the economic load dispatch problem by reducing fuel cost, emission allocation, and transmission loss. Moreover, the authors have proposed an enhanced variant of the fitness-dependent optimizer, which incorporates a novel population initialization technique and dynamically employed sine maps to select the weight factor. The enhanced initialization approach uses a quasi-random Sobol sequence to generate the initial solutions in the multidimensional search space. A standard 24-unit system is employed for experimental evaluation with different power demands. The empirical results demonstrate the superior performance of the enhanced variant in terms of low transmission loss, low fuel cost, and low emission allocation compared to the conventional fitness-dependent optimizer; the experimental study obtained the lowest transmission loss of 7.94E-12 using the enhanced fitness-dependent optimizer. Correspondingly, various standard estimations are used to prove the stability of the fitness-dependent optimizer in the phases of exploitation and exploration.
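The Sobol-sequence initialization mentioned above can be illustrated in one dimension, where the Sobol sequence (with standard direction numbers) reduces to the base-2 radical-inverse (Van der Corput) sequence. A full multidimensional Sobol generator needs per-dimension direction numbers (library implementations such as scipy.stats.qmc.Sobol provide them); the sketch below shows only the 1-D idea:

```python
def van_der_corput(n):
    """Base-2 radical-inverse sequence (the one-dimensional Sobol
    sequence): mirrors the binary digits of i about the radix point,
    yielding well-spread points in [0, 1)."""
    seq = []
    for i in range(1, n + 1):
        v, denom, k = 0.0, 2, i
        while k:
            v += (k % 2) / denom
            k //= 2
            denom *= 2
        seq.append(v)
    return seq

def init_population(n, lo, hi):
    """Scale the low-discrepancy points onto a decision variable's
    range [lo, hi] (illustrative population initializer)."""
    return [lo + u * (hi - lo) for u in van_der_corput(n)]
```

Unlike uniform random draws, successive points fill the gaps left by earlier ones, which is why quasi-random initialization tends to cover the search space more evenly for the same population size.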

  • Research Article
  • Cited by: 28
  • DOI: 10.3390/math9233111
ANA: Ant Nesting Algorithm for Optimizing Real-World Problems
  • Dec 2, 2021
  • Mathematics
  • Deeam Najmadeen Hama Rashid + 2 more

In this paper, a novel swarm intelligent algorithm called the ant nesting algorithm (ANA) is proposed. The algorithm is inspired by Leptothorax ants and mimics the behavior of ants searching for positions to deposit grains while building a new nest. Although the algorithm is inspired by the swarming behavior of ants, it does not have any algorithmic similarity with the ant colony optimization (ACO) algorithm. It is worth mentioning that ANA is considered a continuous algorithm that updates the search agent position by adding a rate of change (e.g., step or velocity). ANA computes the rate of change differently, as it uses previous and current solutions and their fitness values during the optimization process to generate weights by utilizing the Pythagorean theorem. These weights drive the search agents during the exploration and exploitation phases. The ANA algorithm is benchmarked on 26 well-known test functions, and the results are verified by a comparative study with the genetic algorithm (GA), particle swarm optimization (PSO), dragonfly algorithm (DA), five modified versions of PSO, whale optimization algorithm (WOA), salp swarm algorithm (SSA), and fitness dependent optimizer (FDO). ANA outperforms these prominent metaheuristic algorithms on several test cases and provides quite competitive results on others. Finally, the algorithm is employed to optimize two well-known real-world engineering problems: antenna array design and frequency-modulated synthesis. The results on the engineering case studies demonstrate the proposed algorithm's capability in optimizing real-world problems.

  • Research Article
  • Cited by: 5
  • DOI: 10.7717/peerj-cs.1557
An improved hybrid whale optimization algorithm for global optimization and engineering design problems
  • Nov 9, 2023
  • PeerJ Computer Science
  • Abolfazl Rahimnejad + 5 more

The whale optimization algorithm (WOA) is a widely used metaheuristic optimization approach with applications in various scientific and industrial domains. However, WOA has the limitation of relying solely on the best solution to guide the population in subsequent iterations, overlooking the valuable information embedded in other candidate solutions. To address this limitation, we propose a novel and improved variant called Pbest-guided differential WOA (PDWOA). PDWOA combines the strengths of WOA, the particle swarm optimizer (PSO), and the differential evolution (DE) algorithm to overcome these shortcomings. In this study, we conduct a comprehensive evaluation of the proposed PDWOA algorithm on both benchmark and real-world optimization problems. The benchmark tests comprise 30-dimensional functions from the CEC 2014 Test Functions, while the real-world problems include pressure vessel optimal design, tension/compression spring optimal design, and welded beam optimal design. We present the simulation results, including the outcomes of non-parametric statistical tests, namely the Wilcoxon signed-rank test and the Friedman test, which validate the performance improvements achieved by PDWOA over other algorithms. The results of our evaluation demonstrate the superiority of PDWOA compared to recent methods, including the original WOA. These findings provide valuable insights into the effectiveness of the proposed hybrid WOA algorithm. Furthermore, we offer recommendations for future research to further enhance its performance and open new avenues for exploration in the field of optimization algorithms. The MATLAB codes of PDWOA are publicly available at https://github.com/ebrahimakbary/PDWOA.

  • Research Article
  • Cited by: 99
  • DOI: 10.1155/2020/4854895
Cat Swarm Optimization Algorithm: A Survey and Performance Evaluation.
  • Jan 22, 2020
  • Computational Intelligence and Neuroscience
  • Aram M Ahmed + 2 more

This paper presents an in-depth survey and performance evaluation of the cat swarm optimization (CSO) algorithm. CSO is a robust and powerful metaheuristic swarm-based optimization approach that has received very positive feedback since its emergence. It has been applied to many optimization problems, and many variants of it have been introduced. However, the literature lacks a detailed survey or performance evaluation in this regard. Therefore, this paper is an attempt to review all these works, including its developments and applications, and group them accordingly. In addition, CSO is tested on 23 classical benchmark functions and 10 modern benchmark functions (CEC 2019). The results are then compared against three novel and powerful optimization algorithms, namely, the dragonfly algorithm (DA), butterfly optimization algorithm (BOA), and fitness dependent optimizer (FDO). These algorithms are then ranked according to the Friedman test, and the results show that CSO ranks first overall. Finally, statistical approaches are employed to further confirm the superior performance of the CSO algorithm.

  • Research Article
  • DOI: 10.21917/ijsc.2025.0540
An Adaptive Pattern-Driven Optimization: Tailor-Inspired Metaheuristic for Solving Constrained Real-World Optimization Problems
  • Jul 1, 2025
  • ICTACT Journal on Soft Computing
  • Karthik Chandran + 1 more

Real-world optimization problems in engineering, logistics, and resource allocation are often constrained and multi-modal, posing a challenge for traditional optimization algorithms. Metaheuristic algorithms inspired by natural and artificial phenomena have shown promise, but many fail to balance exploration and exploitation effectively, especially under stringent constraints. Existing algorithms such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) face issues in convergence speed and constraint handling, particularly in high-dimensional spaces or when constraints are dynamic or complex. We propose an Adaptive Pattern-Driven Optimization (APDO) algorithm, a novel tailor-inspired metaheuristic that mimics the adaptive decision-making process of a tailor designing garments. APDO integrates three primary operators—Pattern Selection, Fabric Adjustment, and Stitch Reinforcement—to handle constraints adaptively. The algorithm combines pattern memory (historical bests), probabilistic pattern mutation, and a constraint-domination principle to ensure feasibility and diversity. The core idea is to iteratively “cut and stitch” solutions to adapt the search process, enabling dynamic constraint satisfaction and global optimization. We benchmarked APDO against five popular methods (GA, PSO, DE, Firefly Algorithm, and Whale Optimization Algorithm) on a suite of 10 real-world constrained problems, including mechanical component design and energy scheduling tasks. APDO outperformed all baselines in terms of convergence speed, constraint satisfaction rate, and solution quality. In particular, APDO achieved an average feasibility rate of 97.6% and an improvement of 4.2–11.8% in best fitness across problems.
