Abstract

New programming models have been introduced to help programmers deal with the complexity of large-scale systems, simplifying the coding process and making applications more scalable. Task-based programming is one such model that has recently gained popularity. At the same time, understanding the performance of multicore systems is crucial for achieving faster execution times and better efficiency, but it is becoming harder due to the increasing complexity of hardware architectures and the interplay between task scheduling and caches. In this work, we develop models to understand how scheduling affects the memory behavior, and thereby the performance, of tasks in the task-based context. To this end, we study cache sharing in both its temporal and spatial dimensions. For temporal cache sharing, we model the effect of data being reused over time across the executed tasks in order to predict different scheduling scenarios, resulting in a tool called StatTask. For spatial cache sharing, we quantify the effect of tasks competing for the cache at a given point in their execution and use it to model their behavior for arbitrary cache sizes. We also present a new methodology, TaskInsight, that explains performance differences across different schedules of the same application. Finally, we explain how these methods together form a solid platform for gaining insight into how to improve the performance of large-scale task-based applications.
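To make the temporal cache-sharing idea concrete, below is a minimal, hypothetical Python sketch (it is not part of StatTask or TaskInsight, and all task names, addresses, and cache sizes are illustrative assumptions). It computes per-access reuse distances under a simple fully associative LRU model and shows how scheduling two data-sharing tasks back to back yields fewer misses than separating them with an unrelated task.

```python
# Hypothetical sketch, not the thesis' actual tools: illustrates how the order in
# which tasks run changes reuse distances, and hence misses, in a shared cache.
from collections import OrderedDict

def reuse_distances(trace):
    """For each access, return the number of distinct addresses touched since the
    previous access to the same address; float('inf') marks a cold access."""
    last_seen = OrderedDict()   # addresses kept in LRU order
    distances = []
    for addr in trace:
        if addr in last_seen:
            keys = list(last_seen.keys())
            distances.append(len(keys) - 1 - keys.index(addr))
            last_seen.move_to_end(addr)
        else:
            distances.append(float('inf'))
            last_seen[addr] = None
    return distances

def miss_ratio(trace, cache_lines):
    """Fully associative LRU model: an access misses if its reuse distance
    is at least the number of cache lines."""
    dists = reuse_distances(trace)
    misses = sum(1 for d in dists if d >= cache_lines)
    return misses / len(trace)

# Two hypothetical tasks that share part of their data (addresses 2 and 3).
task_a = [0, 1, 2, 3]
task_b = [2, 3, 4, 5]

back_to_back = task_a + task_b                      # B reuses A's data while it is still cached
separated    = task_a + [10, 11, 12, 13] + task_b   # an unrelated task evicts A's data first

print(miss_ratio(back_to_back, cache_lines=4))      # 0.75: the shared data hits
print(miss_ratio(separated,    cache_lines=4))      # 1.0: longer reuse distances turn hits into misses
```

Under these assumptions, the only difference between the two runs is the schedule, yet the miss ratio changes; this is the kind of schedule-dependent memory behavior the models in this work aim to capture and predict.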
