Abstract
The main problem in designing dictionary machines on coarse-grained hypercube multiprocessors, in contrast to the widely studied dictionary problem on fine-grained hypercube multiprocessors, is that the unequal distribution of inserted and deleted records can cause the sizes of the sets stored at the individual processors to vary considerably. This problem, usually referred to as the load balancing problem, can considerably degrade the dictionary machine's performance. In this note we show that the load balancing problem for coarse-grained hypercube dictionary machines can be solved with provable bounds on the sizes of the data sets, and with little computational overhead.
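The abstract states only that such bounds are achievable; the balancing scheme itself is not described here. As a rough illustration of the general idea (and not the paper's algorithm), the following Python sketch simulates the classical dimension-exchange technique on a hypercube, in which each processor averages its load with the neighbor whose address differs in one bit, one dimension per round; the function name and structure are assumptions of this sketch.

```python
# Hypothetical sketch: dimension-exchange load balancing on a simulated
# hypercube. This is a standard balancing technique, shown only to
# illustrate the load balancing problem; it is NOT the scheme proved
# correct in the paper.

def dimension_exchange(loads):
    """Balance integer loads on a 2^d-processor hypercube.

    loads[p] is the number of records held by processor p. After one
    sweep over all d dimensions, every processor holds the average
    load, up to integer rounding.
    """
    n = len(loads)
    d = n.bit_length() - 1              # hypercube dimension
    assert n == 1 << d, "processor count must be a power of two"
    for dim in range(d):                # one exchange round per dimension
        for p in range(n):
            q = p ^ (1 << dim)          # neighbor across dimension `dim`
            if p < q:                   # handle each hypercube edge once
                total = loads[p] + loads[q]
                loads[p] = total // 2
                loads[q] = total - total // 2
    return loads

# Example: a maximally skewed distribution on 8 processors is
# fully balanced after d = 3 rounds.
print(dimension_exchange([80, 0, 0, 0, 0, 0, 0, 0]))  # -> [10, 10, ..., 10]
```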