Abstract
In this paper, we explore parallel implementations of the abstract data type priority queue. We use the BSP* model, an extension of Valiant's BSP model that rewards blockwise communication, i.e., sending a few large messages instead of many small ones. We present two randomized approaches for different relations between the size of the data structure and the number of parallel updates to be performed. Both yield work-optimal algorithms that need asymptotically less communication than computation time and use large messages. All previous work-optimal algorithms need asymptotically as much communication as computation or do not consider blockwise communication. As a building block, we use a work-optimal randomized selection algorithm that might be of independent interest: it uses less communication than computation time if the keys are distributed at random. A similar selection algorithm was independently developed by Gerbessiotis and Siniolakis for the standard BSP model; we improve upon previous work by both reducing the amount of communication and by using large messages.

Keywords: Selection Algorithm, Priority Queue, Communication Time, Large Message, Block Distribution
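As an illustration of the selection building block, the sketch below shows a minimal sequential randomized selection (quickselect) in Python. The function name and structure are ours and are only an assumption for exposition; the paper's BSP* algorithm distributes the keys over the processors, partitions them locally, and communicates only pivot candidates and bucket counts, which is not reproduced here.

```python
import random

def randomized_select(keys, k):
    """Return the k-th smallest key (0-indexed) by randomized partitioning.

    Sequential analogue of the selection building block: in the parallel
    BSP* setting, the partitioning work is done locally on each processor
    and only small summaries (pivot candidates, bucket sizes) are exchanged.
    """
    while True:
        pivot = random.choice(keys)            # random pivot keeps expected work linear
        smaller = [x for x in keys if x < pivot]
        equal = [x for x in keys if x == pivot]
        if k < len(smaller):                   # answer lies in the smaller bucket
            keys = smaller
        elif k < len(smaller) + len(equal):    # pivot itself is the k-th smallest
            return pivot
        else:                                  # recurse into the larger bucket
            k -= len(smaller) + len(equal)
            keys = [x for x in keys if x > pivot]

# Example: the element of rank 2 in [7, 3, 9, 1, 5] is 5.
print(randomized_select([7, 3, 9, 1, 5], 2))
```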