Abstract

We propose job migration policies that consider effective use of global memory in addition to CPU load sharing in distributed systems. When a node is identified as lacking sufficient memory space to serve its jobs, one or more jobs of that node are migrated to remote nodes with low memory allocations. If the memory space is sufficiently large, the jobs are scheduled by a CPU-based load sharing policy. Following the principle of sharing both CPU and memory resources, we present several load sharing alternatives. Our objective is to reduce the number of page faults caused by unbalanced memory allocations for jobs among distributed nodes, so that the overall performance of a distributed system can be significantly improved. We have conducted trace-driven simulations to compare CPU-based load sharing policies with our policies. We show that our load sharing policies not only improve the performance of memory-bound jobs, but also maintain the same load sharing quality as the CPU-based policies for CPU-bound jobs. Regarding remote execution and preemptive migration strategies, our experiments indicate that strategy selection in load sharing depends on the memory demand of jobs: remote execution is more effective for memory-bound jobs, while preemptive migration is more effective for CPU-bound jobs. Our CPU-memory-based policy, using either the high-performance or the high-throughput approach together with the remote execution strategy, performs best for both CPU-bound and memory-bound jobs in homogeneous distributed environments.
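The decision rule implied by this policy can be sketched as follows. The Python code below is only an illustration of that rule under our own assumptions; the `Job` and `Node` structures, the `place_job` helper, and the shortest-queue fallback are hypothetical and are not taken from the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Job:
    job_id: int
    memory_demand: int  # memory (e.g., pages) the job needs resident

@dataclass
class Node:
    node_id: int
    memory_capacity: int                    # memory available for user jobs
    cpu_queue_len: int = 0                  # jobs competing for the CPU
    jobs: List[Job] = field(default_factory=list)

    def memory_allocated(self) -> int:
        return sum(j.memory_demand for j in self.jobs)

def place_job(job: Job, local: Node, remotes: List[Node]) -> Node:
    """Illustrative CPU-memory-based placement of an arriving job."""
    if local.memory_allocated() + job.memory_demand > local.memory_capacity:
        # Memory-driven branch: the local node lacks memory space, so the job
        # goes to the remote node with the lowest memory allocation.
        target = min(remotes, key=lambda n: n.memory_allocated())
    else:
        # CPU-driven branch: memory is sufficient, so fall back to a
        # conventional CPU-based policy (here: shortest CPU queue).
        target = min([local, *remotes], key=lambda n: n.cpu_queue_len)
    target.jobs.append(job)
    target.cpu_queue_len += 1
    return target
```

The same rule extends to already-running jobs: when a node becomes memory-overloaded, it would repeatedly move selected jobs to the least memory-loaded remote node until its resident jobs fit in memory.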

Highlights

  • A major performance objective of implementing a load sharing policy in a distributed system is to minimize the execution time of each individual job and to maximize system throughput by effectively using distributed resources, such as CPUs, memory modules, and I/O devices

  • We believe that the overheads of data accesses and movement, such as page faults, have grown to the point where the overall performance of distributed systems would be considerably degraded without serious consideration of memory resources in the design of load sharing policies

  • When a job migration is necessary, the migration can be either a remote execution, which runs the job on a remote node in a nonpreemptive way, or a preemptive migration, which suspends a selected job, moves it to a remote node, and restarts it there (see the sketch after this list)
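
A minimal sketch of the two strategies, together with the selection rule suggested by the experiments (remote execution for memory-bound jobs, preemptive migration for CPU-bound jobs), is given below; the `MigrationStrategy` enum, the `choose_strategy` helper, and the numeric threshold are hypothetical illustrations, not the paper's code.

```python
from enum import Enum, auto

class MigrationStrategy(Enum):
    REMOTE_EXECUTION = auto()      # nonpreemptive: the job starts and runs entirely on the remote node
    PREEMPTIVE_MIGRATION = auto()  # the running job is suspended, its state moved, then restarted remotely

def choose_strategy(job_memory_demand: int, memory_bound_threshold: int) -> MigrationStrategy:
    """Pick a migration strategy from a job's memory demand (hypothetical rule)."""
    if job_memory_demand >= memory_bound_threshold:
        # Memory-bound job: starting it remotely avoids moving a large
        # resident set later.
        return MigrationStrategy.REMOTE_EXECUTION
    # CPU-bound job: its state is comparatively small, so suspending and
    # moving it once a CPU imbalance is observed is relatively cheap.
    return MigrationStrategy.PREEMPTIVE_MIGRATION
```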

Introduction

A major performance objective of implementing a load sharing policy in a distributed system is to minimize the execution time of each individual job and to maximize system throughput by effectively using distributed resources, such as CPUs, memory modules, and I/O devices. Most load sharing schemes (e.g., [1,2,3,4,5]) mainly consider CPU load balancing by assuming that each computer node in the system has a sufficient amount of memory space. These schemes have proved to be effective in improving the overall performance of distributed systems. We believe, however, that the overheads of data accesses and movement, such as page faults, have grown to the point where the overall performance of distributed systems would be considerably degraded without serious consideration of memory resources in the design of load sharing policies. The objective of our new load sharing policy design is to share both CPU and memory services among the nodes in order to minimize both CPU idle times and the number of page faults in distributed systems.
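
To make the cost of ignoring memory concrete, the toy model below (our own illustration, not a formula from the paper) estimates how much memory references slow down once a job's memory demand exceeds the pages it can keep resident; the access and fault-service times are assumed values.

```python
def paging_slowdown(memory_demand_pages: int, resident_pages: int,
                    mem_access_us: float = 0.1,     # assumed cost of an in-memory reference (microseconds)
                    page_fault_us: float = 8000.0   # assumed cost of servicing a page fault (microseconds)
                    ) -> float:
    """Toy estimate of the slowdown caused by page faults (illustrative only).

    Assumes references are spread uniformly over the job's pages, so the
    fraction of references that fault equals the fraction of pages that
    are not resident.
    """
    if memory_demand_pages <= resident_pages:
        return 1.0  # the working set fits: no paging, no slowdown
    miss_fraction = 1.0 - resident_pages / memory_demand_pages
    effective_us = mem_access_us + miss_fraction * page_fault_us
    return effective_us / mem_access_us

# Example: a job needing 1200 pages on a node that can keep only 1000 resident
# is orders of magnitude slower under these assumed parameters (~1.3e4x).
print(paging_slowdown(1200, 1000))
```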
