Abstract

Big data analysis relies on the speedup of parallel computing. In virtualized systems, however, the power of parallel computing is not fully exploited because of the limitations of current VMM schedulers. Xen, one of the most popular virtualization platforms, is widely used by industry to host parallel jobs. In practice, virtualized systems are expected to accommodate both parallel and serial jobs, and resource contention between virtual machines causes severe performance degradation for parallel jobs. Moreover, physical resources are vastly wasted during communication because of the ineffective scheduling of parallel jobs. Unfortunately, the existing schedulers of Xen were originally designed for serial jobs and are not capable of scheduling parallel jobs correctly. This paper presents vChecker, an application-level co-scheduler that mitigates the performance degradation of parallel jobs and optimizes the utilization of hardware resources. Our co-scheduler takes the number of available CPU cores into account on the one hand and satisfies the needs of parallel jobs on the other, which helps the credit scheduler of Xen schedule parallel jobs appropriately. Because our co-scheduler is implemented at the application level, no modification to the hypervisor is required. Experimental results show that vChecker improves the performance of parallel jobs in Xen and enhances the utilization of the system.
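The core idea the abstract describes, admitting a parallel job only when enough physical cores are free so that all of its vCPUs can run concurrently, can be sketched as follows. This is a minimal illustrative sketch, not vChecker's actual implementation; the function names, the job representation, and the greedy admission policy are all assumptions for illustration.

```python
def can_coschedule(required_vcpus: int, free_cores: int) -> bool:
    """A parallel job runs efficiently only if every vCPU can get a core
    at the same time; otherwise communication phases stall on preempted
    vCPUs and physical resources are wasted (hypothetical check)."""
    return required_vcpus <= free_cores


def schedule(jobs, total_cores):
    """Greedy admission sketch: co-schedule the parallel jobs that fit in
    the currently available cores and defer the rest.

    jobs: list of (name, required_vcpus) tuples -- illustrative format.
    """
    free = total_cores
    admitted, deferred = [], []
    for name, vcpus in jobs:
        if can_coschedule(vcpus, free):
            admitted.append(name)
            free -= vcpus
        else:
            deferred.append(name)
    return admitted, deferred


# Example: two 4-vCPU MPI jobs fit on 8 cores; a later job is deferred.
admitted, deferred = schedule([("mpi-a", 4), ("mpi-b", 4), ("serial-c", 1)], 8)
print(admitted, deferred)  # ['mpi-a', 'mpi-b'] ['serial-c']
```

An application-level checker like this can run without hypervisor changes because it only needs a view of current core availability and each job's vCPU demand, leaving the actual time-slice allocation to Xen's credit scheduler.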
