Nature-inspired metaheuristic algorithms, such as Particle Swarm Optimization (PSO), are powerful general-purpose optimization tools, but they invariably lack rigorous theoretical justification and can fail to find a global optimum. By treating PSO as a random search optimization process and repairing the famous Global Search Convergence Theorem by imposing an additional condition in the proof, we create a novel theory-based algorithm, the Double Exponential Particle Swarm Optimization algorithm (DExPSO), that converges to a global optimum. In particular, we show that the common practice of using uniform variates as the stochastic components in PSO and related algorithms does not satisfy the sufficient conditions in DExPSO, which may explain why PSO and other nature-inspired algorithms, such as QPSO, LcRiPSO, and CSO, can fail to converge. Additionally, in more complicated design problems, we show that DExPSO tends to converge to the support points of the optimal design more frequently and faster than PSO and its variants do. Moreover, other PSO variants can be modified into DExPSO variants, and such hybridization offers promising improvements in the quality of the global search. Our applications include finding designs that minimize the integrated mean squared prediction error and locally D-optimal exact designs for a 68-compartmental model to assess radioactive particles retained in the human lung after exposure. Because PSO and, more generally, metaheuristics are used across disciplines, including ecology, pharmacokinetic and pharmacodynamic studies, agriculture, engineering, and computer science, our results have potentially broad and deep implications.
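To illustrate the distinction the abstract draws between uniform and double-exponential stochastic components, the following Python sketch contrasts a classical PSO velocity update with a DExPSO-style update that draws its random multipliers from a Laplace (double exponential) distribution. This is a minimal illustration under stated assumptions: the update form, coefficient values, Laplace scale, and function names are illustrative and are not the published DExPSO algorithm.

```python
# Hedged sketch: classical PSO velocity update vs. an assumed DExPSO-style
# update whose stochastic components are Laplace (double exponential) variates.
# All parameter values and the function names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def pso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Classical PSO velocity update with uniform U(0, 1) stochastic components."""
    r1 = rng.uniform(0.0, 1.0, size=x.shape)
    r2 = rng.uniform(0.0, 1.0, size=x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def dexpso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, scale=0.5):
    """Assumed DExPSO-style update: heavier-tailed Laplace variates replace the
    uniform variates, keeping the perturbations unbounded so the search retains
    positive probability of reaching any region of the space."""
    r1 = rng.laplace(loc=0.0, scale=scale, size=x.shape)
    r2 = rng.laplace(loc=0.0, scale=scale, size=x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

# Minimal usage: one particle in a 2-D search space.
x = np.array([0.3, -1.2])        # current position
v = np.zeros_like(x)             # current velocity
pbest = np.array([0.1, -1.0])    # particle's best position so far
gbest = np.array([0.0, 0.0])     # swarm's best position so far
print(pso_velocity(v, x, pbest, gbest))
print(dexpso_velocity(v, x, pbest, gbest))
```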