Abstract

Early distributed shared memory systems used the shared virtual memory approach with fixed-size pages, usually 1–8 KB. Because this does not match the variable granularity of sharing in most programs, the emphasis has recently shifted to distributed object-oriented systems. With small object sizes, however, the overhead of inter-process communication can be large enough to make a distributed program too inefficient for practical use. To support research in this area, we have implemented a user-level distributed programming testbed, DIPC, that provides shared memory, semaphores, and barriers. We develop a computationally efficient model of distributed shared memory using approximate queueing network techniques. The model can accommodate several algorithms, including central server, migration, and read-replication, and has been carefully validated against measurements on our distributed shared memory testbed. Results indicate that for large granularities of sharing and small access bursts, the central server algorithm performs better than both migration and read-replication. Read-replication outperforms migration for small and moderate object sizes in applications with a high degree of read-sharing, while migration outperforms read-replication for large object sizes in applications with a moderate degree of read-sharing.
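To illustrate the kind of trade-off the abstract describes, the following is a minimal, hypothetical sketch (not the paper's validated queueing-network model) that contrasts a central server algorithm, modeled as a single M/M/1 queue, with read-replication, where reads hit a local replica and writes pay an invalidation cost. All parameter names and values are assumptions chosen for illustration only.

```python
# Toy comparison of two DSM access-time estimates.
# NOT the paper's model: parameters and cost structure are hypothetical.

def central_server_access_time(n_clients, access_rate, service_rate, net_delay):
    """Every access goes to one server, modeled as an M/M/1 queue."""
    arrival_rate = n_clients * access_rate        # aggregate request rate at the server
    if arrival_rate >= service_rate:
        return float("inf")                       # server saturated
    queueing_delay = 1.0 / (service_rate - arrival_rate)
    return 2 * net_delay + queueing_delay         # network round trip + server time


def read_replication_access_time(read_fraction, local_time, net_delay,
                                 n_replicas, invalidate_time):
    """Reads are satisfied locally; writes invalidate every other replica."""
    write_cost = 2 * net_delay + (n_replicas - 1) * invalidate_time
    return read_fraction * local_time + (1 - read_fraction) * write_cost


if __name__ == "__main__":
    cs = central_server_access_time(n_clients=8, access_rate=50.0,
                                    service_rate=1000.0, net_delay=0.002)
    rr = read_replication_access_time(read_fraction=0.9, local_time=0.0001,
                                      net_delay=0.002, n_replicas=8,
                                      invalidate_time=0.001)
    print(f"central server  : {cs * 1000:.2f} ms per access")
    print(f"read-replication: {rr * 1000:.2f} ms per access")
```

With a high read fraction, the replication estimate drops well below the central-server one, mirroring the qualitative conclusion about read-sharing; varying the write fraction or replica count shifts the balance the other way.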
