Abstract

Hyperheuristics based on parameterized metaheuristic schemas are computationally demanding. To reduce execution times, a shared-memory schema of hyperheuristics with four levels of parallelism is used: two levels for the hyperheuristic and two for the metaheuristics it applies. The parallel schema is executed on a many-core system in native mode, and the four levels of parallelism make it possible to take full advantage of the massive parallelism offered by this architecture. An auto-tuning methodology selects the number of threads used at each level. A theoretical model of the execution time of the parameterized metaheuristic schema is developed, and the model is adapted to a particular metaheuristic through experimentation. The massive parallelism of a many-core system can yield satisfactory fitness together with a significant reduction in execution times; the four-level parallelism schema supports this, and the auto-tuning engine facilitates the selection of the optimal number of threads at each level. The best results are obtained with a relatively low number of threads distributed among the four levels of parallelism across the hyperheuristic and the metaheuristics.
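The abstract does not include code, but the four-level structure it describes can be illustrated with nested OpenMP parallel regions. The sketch below is a minimal, hypothetical example assuming two nested levels inside the hyperheuristic and two inside each metaheuristic run; all identifiers (n_hyper_outer, n_meta_inner, run_metaheuristic, evaluate_candidate) and the fixed thread counts are illustrative assumptions, not the authors' implementation. In the paper, the thread count at each level is chosen by the auto-tuning engine rather than fixed by hand.

```c
/* Hedged sketch of a four-level parallel schema: two nested OpenMP
   levels for the hyperheuristic and two for each metaheuristic run.
   All names and thread counts are illustrative placeholders. */
#include <omp.h>
#include <stdio.h>

/* Threads per level; the paper's auto-tuning engine would select
   these values, here they are fixed for illustration. */
static int n_hyper_outer = 2;  /* level 1: hyperheuristic population  */
static int n_hyper_inner = 2;  /* level 2: work inside the hyperheuristic */
static int n_meta_outer  = 2;  /* level 3: metaheuristic population   */
static int n_meta_inner  = 2;  /* level 4: work inside the metaheuristic  */

/* Stand-in fitness computation (hypothetical). */
static double evaluate_candidate(int h, int e, int m, int f) {
    return (double)(h + e + m + f);
}

/* One metaheuristic run, parallelized at levels 3 and 4. */
static double run_metaheuristic(int h, int e) {
    double best = 0.0;
    #pragma omp parallel for num_threads(n_meta_outer) reduction(max:best)
    for (int m = 0; m < 8; m++) {
        double local = 0.0;
        #pragma omp parallel for num_threads(n_meta_inner) reduction(max:local)
        for (int f = 0; f < 8; f++) {
            double v = evaluate_candidate(h, e, m, f);
            if (v > local) local = v;
        }
        if (local > best) best = local;
    }
    return best;
}

int main(void) {
    omp_set_max_active_levels(4);  /* allow four nested parallel levels */
    double best = 0.0;
    /* Levels 1 and 2: parallelism inside the hyperheuristic. */
    #pragma omp parallel for num_threads(n_hyper_outer) reduction(max:best)
    for (int h = 0; h < 4; h++) {
        double local = 0.0;
        #pragma omp parallel for num_threads(n_hyper_inner) reduction(max:local)
        for (int e = 0; e < 4; e++) {
            double v = run_metaheuristic(h, e);
            if (v > local) local = v;
        }
        if (local > best) best = local;
    }
    printf("best fitness found: %f\n", best);
    return 0;
}
```

With the illustrative counts above, the product of the four levels is 2 × 2 × 2 × 2 = 16 threads in total, consistent with the abstract's observation that a relatively low number of threads distributed among the four levels gives the best results.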
