Abstract

To improve the optimization ability of artificial immune algorithms, a memory mechanism for non-genetic information is introduced into the optimization process, and an immune memory optimization algorithm based on non-genetic information is proposed. Emulating the education and experience-inheritance mechanisms of human society, the algorithm extracts, stores, and uses non-genetic information during the evolution of the population. A separate memory base stores this non-genetic information and guides the subsequent search: short-term memory of prior knowledge steers later evolution, increases the intelligence of the search, and reduces blind and repeated searching. The algorithm includes three key operators: a mutation operator, a crossover operator, and a complement operator. The mutation operator efficiently exploits the non-genetic information of the grandparent generation, which accelerates local search, while a threshold controlling the search depth in a single dimension prevents the evolution from stagnating in a local optimum. The complement operator computes comprehensive information over all antibodies in the current population and generates new antibodies containing excellent gene fragments drawn from the global solution space. The crossover operator is applied with small probability at multi-generation intervals, exchanging the value of a single dimension between the optimal antibody and a randomly chosen antibody. Both the crossover operator and the complement operator help the population escape from local optima. In the simulation experiments, the algorithm is evaluated on four standard test functions: the Ackley, Griewank, Rastrigin, and transformed Rastrigin functions. For comparison with the reference algorithms in the high-dimensional case, the dimension is set to 20 and 30, and statistical analyses of the results on the four functions are carried out. To further test the optimization performance in a much larger solution space, multiple random trials are run with dimension 100. On the high-dimensional standard test functions, the simulation results show that the proposed algorithm is superior to the comparison algorithms in convergence speed, convergence precision, and robustness; the results in the very high-dimensional case further show that the new algorithm retains global search ability in high-dimensional solution spaces.
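
The abstract describes the algorithm only at a high level. The following Python sketch illustrates one way the described structure could be organized: a memory base holding non-genetic (grandparent) information, a mutation operator biased by that memory, a complement operator built from population-wide statistics, and a crossover operator applied at multi-generation intervals. All names and parameters (pop_size, crossover_interval, the 0.01 step scale, the coordinate-wise mean used as the "comprehensive information", and so on) are illustrative assumptions, not the authors' implementation; the single-dimension depth threshold and the small-probability rule are omitted for brevity.

```python
import numpy as np

# Hypothetical sketch of the population loop described in the abstract.
# Parameter names and choices are assumptions for illustration only.
def immune_memory_optimize(objective, dim, bounds, pop_size=50,
                           generations=500, crossover_interval=10, rng=None):
    rng = np.random.default_rng(rng)
    low, high = bounds
    pop = rng.uniform(low, high, size=(pop_size, dim))   # antibody population
    fitness = np.array([objective(x) for x in pop])
    memory = {"grandparent": pop.copy()}                  # non-genetic information base

    for gen in range(generations):
        best = pop[np.argmin(fitness)]

        # Mutation: perturb antibodies, biased by the direction of improvement
        # recorded from the grandparent generation stored in the memory base.
        direction = np.sign(pop - memory["grandparent"])
        step = rng.standard_normal((pop_size, dim)) * (high - low) * 0.01
        trial = np.clip(pop + direction * np.abs(step), low, high)

        # Crossover: at multi-generation intervals, swap one dimension between
        # the best antibody and a randomly chosen antibody.
        if gen % crossover_interval == 0:
            j, d = rng.integers(pop_size), rng.integers(dim)
            trial[j, d] = best[d]

        # Greedy replacement; improved antibodies update the memory base.
        trial_fit = np.array([objective(x) for x in trial])
        improved = trial_fit < fitness
        memory["grandparent"][improved] = pop[improved]
        pop[improved], fitness[improved] = trial[improved], trial_fit[improved]

        # Complement: a new antibody built from population-wide statistics
        # (coordinate-wise mean as a stand-in) replaces the worst antibody
        # when it improves on it.
        complement = np.clip(pop.mean(axis=0), low, high)
        comp_fit = objective(complement)
        worst = np.argmax(fitness)
        if comp_fit < fitness[worst]:
            pop[worst], fitness[worst] = complement, comp_fit

    return pop[np.argmin(fitness)], fitness.min()
```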
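
For reference, three of the four benchmark functions used in the experiments (Ackley, Griewank, Rastrigin) have standard definitions, reproduced below in Python; each has a global minimum of 0 at the origin. The "transformed Rastrigin" variant is not specified in the abstract and is therefore omitted.

```python
import numpy as np

def ackley(x):
    # f(x) = -20*exp(-0.2*sqrt(mean(x_i^2))) - exp(mean(cos(2*pi*x_i))) + 20 + e
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):
    # f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i/sqrt(i))), i = 1..n
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):
    # f(x) = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i))
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x))
```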
