Abstract

We explore techniques for designing nonblocking algorithms that do not require advance knowledge of the number of threads that participate, whose time complexity and space consumption both adapt to various measures (rather than being based on predefined worst-case scenarios), and that can continue to reclaim memory even after thread failures. The techniques we introduce can be implemented using widely available hardware synchronization primitives. We present our techniques in the context of solutions to the well-known Collect problem. We also explain how our techniques can be exploited to achieve other results with similar properties, such as long-lived renaming and dynamic memory management for nonblocking data structures. In addition to the algorithmic techniques we introduce, we also clarify and generalize previous properties used to characterize measures of an algorithm's “adaptivity”.
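For readers unfamiliar with the Collect problem, the sketch below is a minimal, deliberately non-adaptive baseline (not the paper's algorithm): a Collect object lets each thread store its latest value and lets any thread collect a view of the values stored so far. This illustrative version uses a fixed-size array of single-writer slots, so it requires the maximum number of threads in advance and its collect cost is always proportional to that bound; these are exactly the static assumptions the techniques in the paper aim to remove. Class and method names are ours, chosen for illustration only.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReferenceArray;

// Illustrative, non-adaptive Collect object: one slot per thread in a
// fixed-size array of atomic registers. The bound maxThreads must be
// known in advance, and collect() always scans every slot.
public class SimpleCollect<V> {
    private final AtomicReferenceArray<V> slots;

    public SimpleCollect(int maxThreads) {
        this.slots = new AtomicReferenceArray<>(maxThreads);
    }

    // store: thread `id` publishes its latest value (a single write).
    public void store(int id, V value) {
        slots.set(id, value);
    }

    // collect: read every slot and return the values seen. Cost is
    // O(maxThreads) even if only a few threads ever stored anything.
    public Map<Integer, V> collect() {
        Map<Integer, V> view = new HashMap<>();
        for (int i = 0; i < slots.length(); i++) {
            V v = slots.get(i);
            if (v != null) {
                view.put(i, v);
            }
        }
        return view;
    }
}
```

An adaptive solution of the kind the paper pursues would, by contrast, not require maxThreads at all, and the cost of collect would depend on how many threads actually participated rather than on a predefined worst-case bound.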
