Abstract

The choice of synchronization primitive used to protect shared resources is a critical aspect of application performance and scalability, which has become extremely unpredictable with the rise of multicore machines. Neither of the most commonly used contention management strategies works well in all cases: spinning provides quick lock handoff and is attractive when the system is undersubscribed but wastes processor cycles in oversubscribed scenarios, whereas blocking saves processor resources and is preferred in oversubscribed cases but adds to the critical path by lengthening the lock handoff phase. Hybrids, such as spin-then-block and spin-then-park, tackle this problem by switching between spinning and blocking depending on the contention level on the lock or the system load. However, threads then follow a fixed strategy and cannot learn and adapt to changes in system behavior. To overcome these limitations, we propose using principles of machine learning to formulate hybrid methods as a reinforcement learning problem. In this way, threads can intelligently learn when they should spin and when they should sleep. The challenges of the suggested technique and future work are also briefly discussed.

Highlights

  • While multicore architectures bring new opportunities for parallel software, they present certain challenges, such as the choice of contention management strategy, which is crucial for the performance and scalability of parallel applications

  • We show that system load cannot serve as the sole criterion for sleeping decisions, contrary to what previous and recent research suggests

  • We show that a thread that follows either of the hybrid methods can be treated as an entity capable of learning optimal actions via interaction with the system

Summary

INTRODUCTION

While multicore architectures bring new opportunities for parallel software, they present certain challenges, such as the choice of contention management strategy, which is crucial for the performance and scalability of parallel applications. To remove scheduler interaction from the critical path, previous [14] and recent [15] research suggests maintaining a measure of system load and parking and waking up threads in bulk as the load changes. These works address two main problems: 1) whether a thread should spin or sleep, and 2) how a thread should make sleeping decisions. We show that a thread that follows either of the hybrid methods can be treated as an entity capable of learning optimal actions (spin or take a timed sleep) via interaction with the system. We formulate both the spin-then-block and spin-then-park strategies as a reinforcement learning (RL) problem, which allows a thread to 1) learn when it should spin or sleep, 2) adapt its behavior to changes in the system, and 3) apply learned experience to future cases.

BACKGROUND
Hybrid Synchronization Primitives
RL-BASED HYBRID SYNCHRONIZATION METHODS
Formulation of Hybrid Primitives as a RL Problem
Solving Challenges of the Agent
DISCUSSIONS AND FUTURE WORK
Findings
CONCLUSION