Multi-Mode Virtualization for Soft Real-Time Systems

Abstract

Real-time virtualization is an emerging technology for embedded systems integration and latency-sensitive cloud applications. Earlier real-time virtualization platforms require offline configuration of the scheduling parameters of virtual machines (VMs) based on their worst-case workloads, but this static approach results in pessimistic resource allocation when the workloads in the VMs change dynamically. Here, we present Multi-Mode-Xen (M2-Xen), a real-time virtualization platform for dynamic real-time systems where VMs can operate in modes with different CPU resource requirements at run-time. M2-Xen has three salient capabilities: (1) dynamic allocation of CPU resources among VMs in response to their mode changes, (2) overload avoidance at both the VM and host levels during mode transitions, and (3) fast mode transitions between different modes. M2-Xen has been implemented within Xen 4.8 using the real-time deferrable server (RTDS) scheduler. Experimental results show that M2-Xen maintains real-time performance in different modes, avoids overload during mode changes, and performs fast mode transitions.
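As a rough illustration of the host-level overload avoidance the abstract describes: a deferrable-server scheduler such as Xen's RTDS gives each VM a budget and a period, and a mode change can only be admitted if the resulting total CPU bandwidth still fits on the physical cores. The function below is a hypothetical sketch of such an admission check, not M2-Xen's actual algorithm:

```python
# Hypothetical admission check for a VM mode change under a deferrable-server
# scheduler: the summed CPU bandwidth (budget/period) of all VMs, with the
# requesting VM's proposed parameters substituted in, must not exceed the
# number of physical cores.

def admit_mode_change(vms, vm_id, new_budget, new_period, num_cores):
    """Return True if vm_id may switch to (new_budget, new_period).

    vms: dict mapping vm_id -> (budget_ms, period_ms).
    """
    total = 0.0
    for vid, (budget, period) in vms.items():
        if vid == vm_id:
            budget, period = new_budget, new_period  # proposed parameters
        total += budget / period  # CPU bandwidth demanded by this VM
    return total <= num_cores

vms = {"vm1": (4, 10), "vm2": (5, 10)}            # VMs at 40% and 50% CPU
print(admit_mode_change(vms, "vm1", 8, 10, 1))    # 0.8 + 0.5 > 1 core -> False
print(admit_mode_change(vms, "vm1", 8, 10, 2))    # 1.3 <= 2 cores    -> True
```

A real platform must also avoid transient overload *during* the transition, which is what M2-Xen's mode-change protocol addresses beyond this steady-state test.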

Similar Papers
  • Research Article
  • Citations: 32
  • 10.1016/j.comcom.2023.06.018
Enhanced resource allocation in distributed cloud using fuzzy meta-heuristics optimization
  • Jun 24, 2023
  • Computer Communications
  • Arun Kumar Sangaiah + 4 more


  • Conference Article
  • Citations: 6
  • 10.1109/ipact.2017.8244944
Autonomic and energy-aware resource allocation for efficient management of cloud data centre
  • Apr 1, 2017
  • Madhukar Shelar + 3 more

Server virtualization is the key technology used in cloud data centers. With this technique, a number of virtual machines (VMs) can run simultaneously on top of a single physical machine (PM), or host server. Each VM hosts a guest operating system, middleware, and applications. A PM offers several dimensions of resources, such as CPU cores, memory, network bandwidth, and storage space. Depending on the requirements of the deployed applications, each VM is allocated resources from the pool available in the PM. The key issues addressed in this paper are placing VMs onto appropriate PMs and, as the need arises, migrating them among PMs while preserving application performance and saving energy. Application performance is improved by reducing the frequency of live VM migrations among PMs, and energy is saved by minimizing the number of active servers in the data center. The proposed resource-allocation algorithm for cloud data centers considers several factors: the major resource requirement during initial VM setup, dynamic resource allocation under peak application load, application performance, and power saving. A data center is simulated with heterogeneous servers by assigning a randomly generated load of VMs running CPU- and memory-intensive applications. Power consumption and VM placement failure rate are the parameters used to analyze the proposed algorithm. The experimental results for the initial placement of VMs are compared with algorithms such as first fit, best fit, and random PM selection. In addition to the initial placement of VMs onto appropriate PMs, the paper also addresses dynamic resource management in the data center.
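The baselines this paper compares against (first fit, best fit) are standard bin-packing heuristics. A minimal single-dimension sketch, with made-up capacities and demands for illustration:

```python
# Illustrative first-fit and best-fit VM placement over one CPU dimension.
# (Not the paper's algorithm; these are its comparison baselines.)

def first_fit(pms, demand):
    """Place on the first PM with enough free capacity; return its index."""
    for i, free in enumerate(pms):
        if free >= demand:
            pms[i] -= demand
            return i
    return None  # no PM can host the VM

def best_fit(pms, demand):
    """Place on the feasible PM with the least free capacity (tightest fit)."""
    best = None
    for i, free in enumerate(pms):
        if free >= demand and (best is None or free < pms[best]):
            best = i
    if best is not None:
        pms[best] -= demand
    return best

pms = [8, 4, 6]                   # free CPU cores on three example PMs
print(first_fit(list(pms), 3))    # -> 0 (first PM with >= 3 cores free)
print(best_fit(list(pms), 3))     # -> 1 (tightest feasible fit: 4 free)
```

Best fit tends to leave larger contiguous gaps for future VMs, which is why it is a common baseline for consolidation-oriented placement.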

  • Conference Article
  • 10.14257/astl.2014.63.02
WARS: A workload-aware CPU resources scheduling for the cloud computing environment
  • Oct 26, 2014
  • Chao Shen + 2 more

Virtualization-based cloud computing platforms allow multiple virtual machines (VMs) to run on the same physical machine. Efficient allocation of the limited underlying resources has been a key issue. This paper presents a workload-aware CPU resources scheduling method (WARS). WARS uses the allocated credits and consumed credits to diagnose the CPU resource requirements of VMs and dynamically adjusts CPU resources according to those requirements. The adjustment of CPU resources is converted into increased or decreased weights of the VMs.

  • Research Article
  • Citations: 1
  • 10.1142/s0218843024500059
Deep Learning Modified Reinforcement Learning with Virtual Machine Consolidation for Energy-Efficient Resource Allocation in Cloud Computing
  • Mar 15, 2024
  • International Journal of Cooperative Information Systems
  • Chiranjit Dutta + 5 more

Cloud computing has attracted significant attention because of the growing service demands of businesses that outsource computationally intensive tasks to data centers. Meanwhile, the infrastructure of a data center comprises hardware resources that consume a great deal of energy and release harmful levels of carbon dioxide. Cloud data centers demand massive amounts of electrical power as modern applications and organizations grow. To prevent resource waste and promote energy efficiency, virtual machines (VMs) must be distributed over numerous physical machines (PMs) in a cloud data center. The actual allocation of VMs to PMs can involve complex decision-making that considers resource utilization, load balancing, performance requirements, and the constraints of the system. Advanced techniques, such as intelligent placement algorithms or dynamic resource allocation, may be employed to optimize resource utilization and achieve efficient VM distribution across multiple PMs. Cloud service providers aim to lower operational expenses by reducing energy consumption while offering clients competitive services. Minimizing large-scale data center power usage while maintaining quality of service (QoS), especially for social-media-based cloud computing systems, is crucial. Consolidating VMs has been highlighted as a promising method for improving resource efficiency and saving energy in data centers. This research provides a deep-learning-augmented reinforcement learning (RL)-based, energy-efficient, and QoS-aware virtual machine consolidation (VMC) approach to address these challenges. The proposed deep learning modified reinforcement learning-virtual machine consolidation (DLMRL-VMC) model can motivate both cloud providers and customers to distribute cloud infrastructure resources to achieve high CPU utilization and good energy efficiency as measured by power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE).
The suggested model, DLMRL-VMC, offers a VM placement approach based on resource usage and dynamic energy consumption to determine the best-matched host, together with a VM selection strategy, Average Utilization Migration Time (AUMT). Based on AUMT, deep learning modified reinforcement learning (DLMRL) chooses a VM with low average CPU utilization and a short migration time. The DLMRL-VMC energy-efficient resource allocation strategy is evaluated on CloudSim VM traces and attains good PUE and CPU utilization.

  • Conference Article
  • Citations: 1
  • 10.1109/iccac.2015.40
A CPU Overhead-Aware VM Placement Algorithm for Network Bandwidth Guarantee in Virtualized Data Centers
  • Sep 1, 2015
  • Kwonyong Lee + 1 more

As server consolidations based on virtualization techniques become popular and cloud services continue to grow rapidly, more and more data centers are being built to accommodate virtual clusters running various workloads. Since these virtual clusters often share the resources provided by physical machines (PMs), interference between virtual machines (VMs) is increasingly likely to affect the performance of applications running on top of the virtual clusters. While many studies have proposed virtual machine placement algorithms to investigate this issue, the problem caused by network performance variability remains a challenging issue. Because these algorithms usually ignore the CPU overhead of processing communication between VMs, the network bandwidth allocated to a VM cannot be fully utilized when a PM does not have enough CPU resources to cover the CPU overhead of VM networking functions. This results in unpredictable performance for applications running on the virtual clusters. This paper proposes a virtual machine placement algorithm that considers the CPU overhead incurred to reserve network bandwidth in a virtualized data center environment. To determine the CPU overhead necessary to guarantee the network bandwidth allocated to a VM, the algorithm uses a performance model based on standard linear regression over data collected from a real environment. By comparing the amount of CPU resource available in the driver domain with the CPU overhead obtained from the performance model, the proposed algorithm decides whether the network bandwidth for the VM can be provided and selects an appropriate location for VM placement. The benchmarking results show that the proposed algorithm guarantees the network bandwidth allocated to each VM without violations when the CPU resources are shared by multiple VMs.
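The core decision the abstract describes can be sketched in a few lines: predict the driver-domain CPU cost of a bandwidth guarantee with a fitted linear model, then admit the VM only if that much CPU headroom exists. The coefficients below are made up for illustration; the paper fits them by linear regression on testbed measurements.

```python
# Hedged sketch of regression-based placement admission. The slope and
# intercept are illustrative assumptions, not the paper's fitted values.

CPU_PER_MBPS = 0.04   # assumed slope: CPU % per Mbps of guaranteed bandwidth
CPU_BASE = 2.0        # assumed intercept: fixed per-VM networking overhead (%)

def networking_cpu_overhead(bandwidth_mbps):
    """Linear model: predicted driver-domain CPU % for a bandwidth guarantee."""
    return CPU_BASE + CPU_PER_MBPS * bandwidth_mbps

def can_place(vm_bandwidth_mbps, driver_domain_free_cpu):
    """Admit the VM only if the driver domain can absorb the overhead."""
    return networking_cpu_overhead(vm_bandwidth_mbps) <= driver_domain_free_cpu

print(can_place(500, 30))   # needs 2 + 20 = 22% <= 30% free -> True
print(can_place(900, 30))   # needs 2 + 36 = 38% > 30% free  -> False
```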

  • Research Article
  • Citations: 1
  • 10.14257/ijgdc.2015.8.1.13
A Formal Method of CPU Resources Scheduling in the Cloud Computing Environment
  • Feb 28, 2015
  • International Journal of Grid and Distributed Computing
  • Xiaodong Liu + 2 more

In the virtualization-based cloud computing environment, multiple computers are allowed to run as virtual machines (VMs) on a single physical computer. Efficient scheduling of the limited underlying resources has been a key issue. This paper presents a formal method of CPU resources scheduling (FRS). VMs are divided into three resource statuses according to their resource requirements and runtime information, and FRS formally schedules CPU resources according to these statuses. The implementation of FRS is confined to the VMM layer, without VM dependency. The evaluation shows that the idle CPU resources of some VMs can be used by VMs that need more CPU resources, and overall CPU utilization is improved.

  • Research Article
  • Citations: 28
  • 10.1145/3092946
Predictable Shared Cache Management for Multi-Core Real-Time Virtualization
  • Dec 6, 2017
  • ACM Transactions on Embedded Computing Systems
  • Hyoseung Kim + 1 more

Real-time virtualization has gained much attention for the consolidation of multiple real-time systems onto a single hardware platform while ensuring timing predictability. However, a shared last-level cache (LLC) on modern multi-core platforms can easily hamper the timing predictability of real-time virtualization due to the resulting temporal interference among consolidated workloads. Since such interference caused by the LLC is highly variable and may have not even existed in legacy systems to be consolidated, it poses a significant challenge for real-time virtualization. In this article, we propose a predictable shared cache management framework for multi-core real-time virtualization. Our framework introduces two hypervisor-level techniques, vLLC and vColoring, that enable the cache allocation of individual tasks running in a virtual machine (VM), which is not achievable by the current state of the art. Our framework also provides a cache management scheme that determines cache allocation to tasks, designs VMs in a cache-aware manner, and minimizes the aggregated utilization of VMs to be consolidated. As a proof of concept, we implemented vLLC and vColoring in the KVM hypervisor running on x86 and ARM multi-core platforms. Experimental results with three different guest OSs (i.e., Linux/RK, vanilla Linux, and MS Windows Embedded) show that our techniques can effectively control the cache allocation of tasks in VMs. Our cache management scheme yields a significant utilization benefit compared to other approaches while satisfying timing constraints.
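Cache-coloring techniques like vColoring rest on a simple mapping: the LLC set a page falls into is determined by physical-address bits above the page offset, so restricting a task to pages of certain "colors" partitions the shared cache. A small sketch with illustrative (assumed) cache parameters:

```python
# Page-coloring arithmetic behind LLC partitioning (parameters are assumed,
# not taken from the paper's platforms).

PAGE_SIZE = 4096
CACHE_SIZE = 2 * 1024 * 1024   # assumed 2 MiB last-level cache
ASSOC = 16                     # assumed 16-way set associativity

# Number of distinct cache partitions ("colors") available to the allocator.
NUM_COLORS = CACHE_SIZE // (ASSOC * PAGE_SIZE)

def page_color(phys_addr):
    """Color of the physical page containing phys_addr."""
    return (phys_addr // PAGE_SIZE) % NUM_COLORS

print(NUM_COLORS)                    # 32 colors with these parameters
print(page_color(0x0000))            # 0
print(page_color(32 * PAGE_SIZE))    # wraps back to color 0
```

The hypervisor-level challenge the paper tackles is that a guest OS sees guest-physical addresses, so the hypervisor must control this mapping on the guest's behalf; vLLC and vColoring are its two mechanisms for doing so.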

  • Research Article
  • Citations: 4
  • 10.1016/j.comnet.2023.110141
Unavailability-aware allocation of backup resources considering failures of virtual and physical machines
  • Dec 15, 2023
  • Computer Networks
  • Nozomi Kita + 1 more


  • Research Article
  • Citations: 7
  • 10.14257/ijgdc.2015.8.1.23
A Workload-aware Resources Scheduling Method for Virtual Machine
  • Feb 28, 2015
  • International Journal of Grid and Distributed Computing
  • Hongshan Qu + 2 more

Virtualization-based cloud computing platforms allow multiple virtual machines (VMs) to run on the same physical machine. Efficient allocation of the limited underlying resources has been a key issue. To improve CPU resource utilization, this paper presents a workload-aware CPU resources scheduling method (WARS). WARS uses the allocated credits and consumed credits to diagnose the CPU resource requirements of VMs and dynamically adjusts CPU resources according to those requirements. The adjustment of CPU resources is converted into increased or decreased weights of the VMs. The implementation of WARS is confined to the VMM layer, without VM dependency. Our evaluation shows that WARS can improve the overall utilization of CPU resources.
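The credit-based diagnosis WARS describes can be sketched as follows; the thresholds and step size here are illustrative assumptions, not the paper's values:

```python
# Rough sketch of WARS-style workload diagnosis: compare the credits a VM
# consumed against those it was allocated in the last accounting period,
# and nudge its scheduler weight up or down accordingly.

def adjust_weight(weight, allocated, consumed,
                  hi=0.9, lo=0.5, step=32, w_min=64, w_max=1024):
    """Return a new scheduler weight for one VM.

    A VM consuming nearly all of its credits is CPU-hungry (raise weight);
    one leaving most credits unused is idle (lower weight). All tuning
    constants here are hypothetical.
    """
    usage = consumed / allocated
    if usage >= hi:
        weight = min(weight + step, w_max)   # busy VM: grant more CPU share
    elif usage <= lo:
        weight = max(weight - step, w_min)   # idle VM: reclaim CPU share
    return weight

print(adjust_weight(256, allocated=100, consumed=95))  # busy VM -> 288
print(adjust_weight(256, allocated=100, consumed=20))  # idle VM -> 224
```

Because only weights change, the mechanism fits entirely in the VMM's existing proportional-share scheduler, matching the abstract's claim of no VM dependency.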

  • Research Article
  • Citations: 36
  • 10.1109/tcc.2014.2360399
Planning vs. Dynamic Control: Resource Allocation in Corporate Clouds
  • Jul 1, 2016
  • IEEE Transactions on Cloud Computing
  • Andreas Wolke + 2 more

Nowadays corporate data centers leverage virtualization technology to cut operational and management costs. Virtualization allows splitting and assigning physical servers to virtual machines (VM) that run particular business applications. This has led to a new stream in the capacity planning literature dealing with the problem of assigning VMs with volatile demands to physical servers in a static way such that energy costs are minimized. Live migration technology allows for dynamic resource allocation, where a controller responds to overload or underload on a server during runtime and reallocates VMs in order to maximize energy efficiency. Dynamic resource allocation is often seen as the most efficient means to allocate hardware resources in a data center. Unfortunately, there is hardly any experimental evidence for this claim. In this paper, we provide the results of an extensive experimental analysis of both capacity management approaches on a data center infrastructure. We show that with typical workloads of transactional business applications dynamic resource allocation does not increase energy efficiency over the static allocation of VMs to servers and can even come at a cost, because migrations lead to overheads and service disruptions.

  • Conference Article
  • Citations: 41
  • 10.1109/cloud.2015.33
RT-OpenStack: CPU Resource Management for Real-Time Cloud Computing
  • Jun 1, 2015
  • Sisu Xi + 7 more

Clouds have become appealing platforms for not only general-purpose applications but also real-time ones. However, current clouds cannot provide real-time performance to virtual machines (VMs). We observe the demand for, and the advantage of, co-hosting real-time (RT) VMs with non-real-time (regular) VMs in the same cloud. RT VMs can benefit from the easily deployed, elastic resource provisioning of the cloud, while regular VMs effectively utilize the remaining resources without affecting the performance of RT VMs, through proper resource management at both the cloud and hypervisor levels. This paper presents RT-OpenStack, a cloud CPU resource management system for co-hosting real-time and regular VMs. RT-OpenStack entails three main contributions: (1) integration of a real-time hypervisor (RT-Xen) and a cloud management system (OpenStack) through a real-time resource interface, (2) a real-time VM scheduler that allows regular VMs to share hosts with RT VMs without interfering with the real-time performance of RT VMs, and (3) a VM-to-host mapping strategy that provisions real-time performance to RT VMs while allowing effective resource sharing with regular VMs. Experimental results demonstrate that RT-OpenStack can effectively improve the real-time performance of RT VMs while allowing regular VMs to fully utilize the remaining CPU resources.

  • Conference Article
  • Citations: 7
  • 10.1145/3447786.3456232
Virtual machine preserving host updates for zero day patching in public cloud
  • Apr 21, 2021
  • Mark Russinovich + 5 more

Host software updates are critical to ensure the security, reliability, and compliance of public clouds. Many updates require a virtualization-component restart or an operating-system reboot. Virtual machines (VMs) running on the updated servers must either be restarted or live migrated off. Reboots can result in downtime for the VMs on the order of ten minutes, with further impact on the workloads running in the VMs because cached state is lost. Live migration (LM) is a technology that can avoid the need to shut down VMs. However, LM requires turn space in the form of already-patched hosts, consumes network, CPU, and other resources that scale with the number and activity level of VMs, and has variable impact on VM performance and availability, making it too expensive and disruptive for zero-day security updates that must be applied across an entire fleet on the order of hours. We present a novel update technology, virtual machine preserving host updates (VM-PHU), that does not require turn space, consumes no network and little CPU, preserves VM state, and causes minimal VM blackout time that does not scale with VM resource usage. VM-PHU persists the memory and device state of all running guest VMs, reboots the host and virtualization components into updated code, restores the state of the VMs, and then resumes them. VM-PHU uses several techniques to minimize VM blackout time. One is kernel soft reboot (KSR), which transitions directly to an updated host operating system, bypassing firmware reset of the server and attached devices. To minimize resource consumption and VM disruption, VM-PHU leaves VM memory in physical memory pages and other state in persisted pages across the soft reboot, and it implements a mechanism called fast close to enable a reboot to proceed without waiting for the completion of in-flight VM I/Os to remote storage devices.
We have implemented VM-PHU in Microsoft Azure hosting millions of servers and show results of several zero-day updates that demonstrate VM blackout times on the order of seconds. VM-PHU provides significant benefits to both customers and public cloud vendors by minimizing application downtime while enabling fast and resource efficient updates, including zero-day patches.

  • Research Article
  • 10.18127/j19997493-202403-05
A method for assessing the characteristics of the virtual machine migration process, taking into account the type of migrations and applications
  • Mar 3, 2024
  • Dynamics of Complex Systems - XXI century
  • A.V Toutov + 2 more

Dynamic resource allocation is a critical aspect of cloud computing. It is necessary for scaling computing resources to maintain application performance, reduce costs, and ensure the reliability of IT systems. Dynamic resource allocation is possible using live migration of virtual machines (VMs), but live migration is a relatively expensive and resource-intensive operation. A number of VM migration algorithms have been proposed, each with different performance characteristics depending on the state of the host system, the network, and the VM workload. Recently, many works have developed models for assessing the effectiveness of VM migration, but many of them do not achieve satisfactory prediction accuracy and are built for a single migration algorithm, which limits the applicability of these models. This work uses a statistical approach to estimate the total migration time and VM downtime based on the approximation of probability densities by Gram-Charlier and Laguerre series. Compared with known methods, the proposed method can answer the question of how probable a given total migration time and VM downtime are for a given migration type and VM application. The analysis of total migration time and VM downtime is based on the Virtual Machine Live Migration Dataset, which includes more than 40,000 VM migration records covering five live migration algorithms: post-copy migration, pre-copy migration, and pre-copy's modifications (CPU throttling, delta compression, and data compression). The dataset includes approximately 8,000 migration records for each migration algorithm with 9 types of workloads.
Analysis of the results leads to the conclusion that the type of application significantly influences both the shape of the empirical distribution (histogram) of total migration time and its characteristics; it is therefore advisable to obtain an analytical expression for the distribution law of total migration time and VM downtime that takes these circumstances into account. The experimental results show that the method does not depend on the migration algorithm or the type of application workload. The Laguerre series can be recommended as giving more reliable results than the Gram-Charlier series. The method for estimating total migration time and VM downtime can be integrated into a management system to select the best monitoring window for servers and to evaluate service-level agreement terms.
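For readers unfamiliar with the expansion the paper uses: the Gram-Charlier A series corrects a normal density with skewness and excess-kurtosis terms built from Hermite polynomials. A minimal illustration of the expansion itself (the paper fits such expansions to migration-time and downtime samples):

```python
# Gram-Charlier A series approximation of a density:
#   f(x) ~= phi(z)/sigma * [1 + (skew/6)*He3(z) + (ex_kurt/24)*He4(z)]
# where z = (x - mu)/sigma and He3, He4 are probabilists' Hermite polynomials.
import math

def gram_charlier_pdf(x, mu, sigma, skew, ex_kurt):
    """Gram-Charlier A approximation of a density at point x."""
    z = (x - mu) / sigma
    phi = math.exp(-z * z / 2) / (math.sqrt(2 * math.pi) * sigma)
    he3 = z**3 - 3 * z                 # He3(z)
    he4 = z**4 - 6 * z**2 + 3          # He4(z)
    return phi * (1 + skew / 6 * he3 + ex_kurt / 24 * he4)

# With zero skew and zero excess kurtosis it reduces to the normal pdf:
print(gram_charlier_pdf(0.0, 0.0, 1.0, 0.0, 0.0))   # ~0.3989
```

The four parameters (mean, standard deviation, skewness, excess kurtosis) are estimated from the migration records, which is what makes the approach independent of the specific migration algorithm.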

  • Research Article
  • Citations: 13
  • 10.1155/2022/7873131
Virtual Machine Resource Allocation Optimization in Cloud Computing Based on Multiobjective Genetic Algorithm
  • Mar 10, 2022
  • Computational Intelligence and Neuroscience
  • Feng Shi + 1 more

Cloud computing is an important milestone in the development of distributed computing as a commercial implementation, and it has good prospects. Infrastructure as a service (IaaS) is an important service mode in cloud computing. It combines massive resources scattered across different spaces into a unified resource pool by means of virtualization technology, facilitating the unified management and use of resources. In IaaS mode, all resources are provided in the form of virtual machines (VMs). To achieve efficient resource utilization, reduce users' costs, and save users' computing time, VM allocation must be optimized. This paper proposes a new multiobjective optimization method of dynamic resource allocation for multi-virtual-machine distribution stability. Combining the current state and future predicted data of each application load, it comprehensively considers the cost of virtual machine relocation and the stability of the new virtual machine placement state. A multiobjective optimization genetic algorithm (MOGANS) was designed to solve the problem. The simulation results show that, compared with a genetic algorithm (GA-NN) balancing energy saving and multi-VM redistribution overhead, the VM distribution obtained by MOGANS has a longer stability time. To address this shortcoming, the paper also proposes a multiobjective optimization dynamic resource allocation method (MOGA-C) based on MOEA/D for VM distribution. Experimental simulation illustrates that MOGA-C converges faster and obtains similar multiobjective optimization results at the same computational scale.
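To make the genetic-algorithm framing concrete, here is a toy single-objective sketch of GA-based VM placement (MOGANS itself optimizes multiple objectives; this minimizes only the number of active PMs subject to CPU capacity, with made-up demands):

```python
# Toy genetic algorithm for VM-to-PM placement. Illustrative only: a
# chromosome is a list mapping each VM to a PM index; fitness penalizes
# overloaded PMs and counts active PMs.
import random

VM_DEMAND = [2, 3, 1, 4, 2]   # CPU cores each VM needs (example data)
PM_CAP, NUM_PMS = 8, 5

def fitness(placement):
    """Lower is better: active PM count, heavy penalty for overload."""
    load = [0] * NUM_PMS
    for vm, pm in enumerate(placement):
        load[pm] += VM_DEMAND[vm]
    penalty = sum(max(0, l - PM_CAP) for l in load)
    return sum(1 for l in load if l > 0) + 100 * penalty

def evolve(generations=200, pop_size=30, seed=1):
    random.seed(seed)
    pop = [[random.randrange(NUM_PMS) for _ in VM_DEMAND]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(VM_DEMAND))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.2:              # mutation
                child[random.randrange(len(child))] = random.randrange(NUM_PMS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))   # best placement found and its active-PM count
```

A multiobjective variant such as MOGANS or MOEA/D would keep a Pareto front of placements instead of a single best individual, trading off relocation cost against placement stability.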

  • Research Article
  • 10.52783/jes.1127
Optimizing Task Scheduling in Cloud Data Centres with Dynamic Resource Allocation Using Genetic Algorithm (TSOGA)
  • Apr 4, 2024
  • Journal of Electrical Systems
  • S Alangaram, S P Balakannan

Nowadays, massive business applications are increasingly turning to cloud computing data centres because of their high capability, adaptability, and efficiency in supplying software and hardware resources to networked consumers. The autonomy requirements of virtual machines (VMs) necessitate a flexible resource allocation strategy. The majority of resource utilization models have been inaccurate, making it impossible to determine a virtual machine's energy usage directly from the hardware. Due to the size of modern data centres and the constantly changing character of their resource supply, efficient scheduling solutions must be developed to oversee these resources and meet the objectives of both cloud service providers and cloud customers. Hence, an algorithm called Task Scheduling Optimization based on a Genetic Algorithm (TSOGA) is proposed to dynamically allocate resources for scheduling tasks in cloud data centres. The proposed module initially focuses on the task scheduling process, followed by optimizing the running time of task execution. For data centres with dynamic resource allocation, the goal of TSOGA is to efficiently assign jobs to resources while minimizing execution time and optimizing resource utilization. To manage the data centres while achieving high levels of efficiency in resource allocation, a virtual node was constructed for this research. The genetic algorithm is incorporated to determine an ideal or near-ideal schedule for carrying out tasks with the available resources while taking into account a variety of constraints and goals, such as minimizing task execution and waiting time during dynamic scheduling and efficient resource utilization.
