Abstract
Many parallel applications do not scale as the number of threads increases, which means that executing them with the maximum possible number of threads will not always deliver the best outcome in performance, energy consumption, or the tradeoff between both (represented by the energy-delay product, EDP). Given that, several strategies, online and offline, have already been proposed to properly tune the number of threads according to the application. While the former can capture behaviors that can only be known at runtime, the latter do not impose any execution overhead and can use more elaborate and costly algorithms. However, the learning algorithms used in such offline strategies may take several hours, precluding their use or a smooth migration across different systems. In this scenario, we propose a generic methodology for such offline strategies that significantly decreases the learning time by inferring the execution behavior of parallel applications from smaller input sets than the ones used by the target applications. Through the execution of eighteen well-known benchmarks on two multicore processors, we show that our methodology converges to results that are very close to those obtained with the regular input set, while converging 84.7% faster, on average. We also show that such a strategy delivers better results than a dynamic one, presenting an EDP 7.7% lower, on average, when executing the applications with the number of threads found during learning. Finally, we also compare our learning methodology with an exhaustive search: it has an average learning cost (i.e., the time spent by our search algorithm to find the best configuration) of only 3.1% of that of the exhaustive search to optimize the EDP of the entire benchmark set.
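As a brief illustration of the kind of offline tuning the abstract describes (this is a minimal sketch, not the paper's actual algorithm), the snippet below picks the thread count that minimizes EDP by running the application once per candidate configuration on a reduced input set. The functions `edp`, `offline_search`, and `fake_measurement` are hypothetical names introduced here for illustration; `fake_measurement` stands in for a real runtime/energy measurement.

```python
from typing import Callable, Dict, Iterable, Tuple


def edp(energy_joules: float, runtime_seconds: float) -> float:
    """Energy-delay product: energy multiplied by execution time."""
    return energy_joules * runtime_seconds


def offline_search(run_with_threads: Callable[[int], Tuple[float, float]],
                   candidate_threads: Iterable[int]) -> Tuple[int, Dict[int, float]]:
    """Run the application once per candidate thread count (ideally on a
    reduced input set) and return the count with the lowest EDP."""
    results: Dict[int, float] = {}
    for n in candidate_threads:
        energy, runtime = run_with_threads(n)
        results[n] = edp(energy, runtime)
    best = min(results, key=results.get)
    return best, results


# Toy stand-in for a real measurement: runtime shrinks sub-linearly with
# more threads while power grows, so EDP has a non-trivial minimum.
def fake_measurement(n_threads: int) -> Tuple[float, float]:
    runtime = 100.0 / (n_threads ** 0.7)   # seconds (toy model)
    power = 20.0 + 5.0 * n_threads         # watts (toy model)
    return power * runtime, runtime        # (energy in joules, runtime)


if __name__ == "__main__":
    best, table = offline_search(fake_measurement, [1, 2, 4, 8, 16, 32])
    for n, value in sorted(table.items()):
        print(f"{n:>2} threads -> EDP = {value:10.1f}")
    print(f"Best thread count by EDP: {best}")
```

The key point of the methodology is that the per-candidate measurements are taken with a smaller input set, so the whole search finishes much faster while still identifying a thread count close to the one the full-size input would select.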