Abstract

Threats from out-of-control diseases such as cancer are increasing dramatically. The principal remedy is to obtain a good prediction of the dynamic behavior of the underlying systems so that those systems can be controlled, which makes the development of more effective and efficient reverse-engineering technologies an emerging challenge. In this study, we propose a smarten-up differential evolution (sDE) and a heuristically-deviated local search (hLS) to address this issue. Premature convergence and insufficient exploitation on complex systems limit the ability of differential evolution (DE) to decipher time-series data. Because the spirit of DE lies in introducing individual differences as a directed search deviation, we reinforce the evolutionary variation between the winner and the other members, and also among those members. The idea is implemented with succeeded exploiting searching (a unified locally variant search rule for the best individual that achieves efficient exploitation quickly), differential mutation (a more flexible mutation strategy that strengthens the differential evolution), and a flexible two-way migration. In addition, insufficient global search over a large range is a critical issue for various gradient-based methods. We therefore propose a heuristically-deviated scheme that allows the search to be widened successively, from a tangent to a region, to a large range, and further to a pop-jumping deviation. Three diverting operations (population-toward, random-toward, and popping-diverse differentiation) ensure that the move of hLS achieves a valid escape within a limited amount of time. Simulation tests on S-systems show that almost perfect results are obtained even when learning starts from a random poor point in a wide search space (>99.96% accuracy for a kinetic-order range of [−100, 100] with 80-neighborhood starting points). A perfect prediction of Michaelis-Menten systems shows the potential of hLS for global-search robustness. We additionally discuss long-period dense-sample, short-period sparse-sample, and general-range cases for learning-range robustness (>99.97% average accuracy for 21 sample points), and propose a criterion for setting up a new experiment. These results demonstrate that both sDE and hLS are able to maintain a diverse search and remain flexible in jumping away from an attractor.
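The abstract names the DE operators only at a high level. As a point of reference, the sketch below shows the classical DE/best/1/bin generation step that winner-centred differential mutation builds on; the function and parameter names (de_best_1_bin_step, F, CR) are illustrative assumptions and are not the authors' sDE operators, which are defined only in the paper body.

import numpy as np

def de_best_1_bin_step(pop, objective, F=0.5, CR=0.9, rng=None):
    # One generation of classical DE/best/1/bin -- an illustrative baseline,
    # not the authors' sDE.
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    fit = np.array([objective(x) for x in pop])
    best = pop[np.argmin(fit)]                   # the "winner" the variation is built around
    new_pop = pop.copy()
    for i in range(n):
        candidates = [j for j in range(n) if j != i]
        r1, r2 = rng.choice(candidates, size=2, replace=False)
        mutant = best + F * (pop[r1] - pop[r2])  # differential mutation toward the best member
        cross = rng.random(d) < CR               # binomial crossover mask
        cross[rng.integers(d)] = True            # keep at least one coordinate from the mutant
        trial = np.where(cross, mutant, pop[i])
        if objective(trial) <= fit[i]:           # greedy one-to-one selection
            new_pop[i] = trial
    return new_pop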
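For readers unfamiliar with the benchmark, the kinetic orders mentioned in the accuracy figures are the exponents of the standard S-system power-law form, assumed here in its canonical notation (the paper's own indexing may differ):

\dot{X}_i = \alpha_i \prod_{j=1}^{n} X_j^{g_{ij}} - \beta_i \prod_{j=1}^{n} X_j^{h_{ij}}, \qquad i = 1, \dots, n,

where \alpha_i, \beta_i > 0 are rate constants and g_{ij}, h_{ij} are the kinetic orders, searched here over the range [−100, 100].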
