Active Data Technology in Virtual Machines with Dynamic Command System
- Conference Article
5
- 10.1109/spro.2015.11
- May 1, 2015
The use of virtual machine technology has become a popular approach for defending software applications from attacks by adversaries who wish to compromise the integrity and confidentiality of an application. In addition to providing some inherent obfuscation of the execution of the software application, the use of virtual machine technology can make both static and dynamic analysis more difficult for the adversary. However, a major point of concern is the protection of the virtual machine itself. The major weakness is that the virtual machine presents an inviting target for the adversary. If an adversary can render the virtual machine ineffective, they can focus their energy and attention on the software application. One possible approach is to protect the virtual machine by composing or nesting virtualization layers to impart virtual machine protection techniques to the inner virtual machines "closest" to the software application. This paper explores the concept and feasibility of nested virtualization for software protection using a high-performance software dynamic translation system. Using two metrics for measuring the strength of protection, the preliminary results show that nesting virtual machines can strengthen protection of the software application. While the nesting of virtual machines does increase run-time overhead, initial results indicate that with careful application of the technique, run-time overhead could be reduced to reasonable levels.
- Research Article
1
- 10.5555/2821429.2821435
- May 16, 2015
The use of virtual machine technology has become a popular approach for defending software applications from attacks by adversaries who wish to compromise the integrity and confidentiality of an application. In addition to providing some inherent obfuscation of the execution of the software application, the use of virtual machine technology can make both static and dynamic analysis more difficult for the adversary. However, a major point of concern is the protection of the virtual machine itself. The major weakness is that the virtual machine presents an inviting target for the adversary. If an adversary can render the virtual machine ineffective, they can focus their energy and attention on the software application. One possible approach is to protect the virtual machine by composing or nesting virtualization layers to impart virtual machine protection techniques to the inner virtual machines closest to the software application. This paper explores the concept and feasibility of nested virtualization for software protection using a high-performance software dynamic translation system. Using two metrics for measuring the strength of protection, the preliminary results show that nesting virtual machines can strengthen protection of the software application. While the nesting of virtual machines does increase run-time overhead, initial results indicate that with careful application of the technique, run-time overhead could be reduced to reasonable levels.
- Single Report
- 10.21236/ada566834
- Jul 1, 2012
The North American Aerospace Defense Command (NORAD) and United States Northern Command (USNORTHCOM) (N-NC) crew training has been hindered by an inability to conduct dynamic training and exercises on multiple Command and Control (C2) systems. There was no common simulation injector because of stove-piped acquisitions and legacy interfaces that were incompatible with the Live-Virtual-Constructive Toolkits. The N-NC Joint Training and Exercise Directorate could not afford traditional replication or emulation of all C2 systems and their data sources, nor the inevitable sustainment costs. This paper presents a cost-effective solution to provide dynamic scenario injection into multiple C2 systems: leveraging server and desktop virtualization technology described in previous I/ITSEC papers. The virtualization process transforms stand-alone systems into functionally equivalent virtual machines (VMs). Server virtualization technology lets multiple VMs run as guests on a single host, and a host can support VMs running different operating systems. This allows entire processing strings, distributed throughout North America, to be converted into VMs on a single server. Because the VMs inherit the fidelity of the actual processors, their outputs are as authentic as the operational systems. These VMs feed processed simulation event data into actual C2 systems or equivalent VMs. Future operational system upgrades can be virtualized and then replace existing VMs without changing this infrastructure. Desktop virtualization technology allows users to run multiple VMs in separate windows on a common display. N-NC exploited desktop virtualization to simplify the trainee and model operator's workspaces. They can view and manage multiple VMs with one monitor, keyboard and mouse (controlling simulations, lower echelon processing and operator interaction, and viewing C2 workstation displays).
- Conference Article
- 10.1109/cluster51413.2022.00033
- Sep 1, 2022
Virtualized networks have become the cornerstone of today's large-scale cloud data centers. In particular, the data plane of the virtualized network, consisting of virtual switches, virtual routers, and other software network functions, performs all network packet processing for virtual machines (VMs). However, current virtualized data plane solutions incur drastic performance interference with co-resident VMs and thus suffer from unpredictable network performance, especially in terms of tail latency. In this work, we show that the performance issue stems from the fact that the CPU plays a dual role of both communication and computation in virtualized networks. A number of virtual network components and their complex packet processing create an undue burden on the hosts' CPUs and in turn cause mutual performance interference among VMs and networks. To address this issue, we present a multipath data plane solution, where the traffic of VMs can be adaptively and seamlessly offloaded to adjacent hosts. At the core of this design is optimizing the VM traffic allocation among multiple paths. We formulate the VM multipath traffic allocation problem with coupled variables of computing and network resources, which were treated as mutually independent in prior research. We then present a distributed algorithm to efficiently solve the large-scale, interdependent global optimization problem, with convergence and optimality guarantees. Through extensive simulations and real-world testbed experiments, we show that our solution delivers consistent performance improvement (up to 6.7× improvement in aggregate throughput and 21.4× reduction in tail latency, respectively) in the dynamic cloud system.
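The paper formulates traffic allocation as a coupled global optimization solved by a distributed algorithm; the toy sketch below only illustrates the basic multipath idea of splitting a VM's traffic across paths in proportion to their capacity. The function name and capacity model are assumptions, not the paper's formulation.

```python
def split_traffic(demand_mbps, path_capacities):
    """Split one VM's traffic across multiple data-plane paths in
    proportion to each path's capacity, capped by total capacity."""
    total_cap = sum(path_capacities)
    sendable = min(demand_mbps, total_cap)
    return [sendable * c / total_cap for c in path_capacities]
```

For example, a 15 Mbps flow over paths with 10 and 20 Mbps capacity would be split 5/10; demand beyond the total capacity is simply dropped at the source in this sketch.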
- Conference Article
27
- 10.1109/cloud.2012.56
- Jun 1, 2012
In Infrastructure-as-a-Service datacenters, the placement of Virtual Machines (VMs) on physical hosts is dynamically optimized in response to the resource utilization of the hosts. However, existing live migration techniques, used to move VMs between hosts, involve large data transfers and prevent dynamic consolidation systems from optimizing VM placements efficiently. In this paper, we propose a technique called "reusing" that reduces the amount of memory transferred by live migration. When a VM migrates to another host, the memory image of the VM is kept on the source host. When the VM later migrates back to the original host, the kept memory image is "reused", i.e. memory pages that are identical to the kept pages are not transferred. We implemented a system named MiyakoDori that uses memory reusing in live migrations. Evaluations show that MiyakoDori significantly reduced the amount of memory transferred by live migrations and eliminated 87% of unnecessary energy consumption when integrated with our dynamic VM consolidation system.
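The "reusing" idea can be sketched as a page-level comparison: keep a digest of each page of the image left at the source, and on a return migration transfer only the pages that changed in the meantime. The hashing scheme and function names here are illustrative assumptions, not MiyakoDori's actual dirty-page tracking.

```python
import hashlib

PAGE_SIZE = 4096

def page_digest(page: bytes) -> bytes:
    """Hash one memory page so kept and current copies compare cheaply."""
    return hashlib.sha256(page).digest()

def pages_to_transfer(current, kept_digests):
    """Return indices of pages that must be sent over the wire.

    If the destination kept an image from a previous migration, only pages
    that changed since then need to be transferred; otherwise send all."""
    if kept_digests is None:          # first migration: full image
        return list(range(len(current)))
    return [i for i, page in enumerate(current)
            if i >= len(kept_digests) or page_digest(page) != kept_digests[i]]

# Toy demo: a 4-page VM image, one page dirtied between migrations.
image = [bytes([i]) * PAGE_SIZE for i in range(4)]
kept = [page_digest(p) for p in image]   # digests of the image left behind
image[2] = b"\xff" * PAGE_SIZE           # page 2 modified while away
```

With the kept digests, only page 2 would be retransmitted on the way back instead of the whole image.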
- Book Chapter
39
- 10.1007/978-3-319-62238-5_6
- Sep 21, 2017
Virtual machine (VM) consolidation is one of the key mechanisms for designing an energy-efficient dynamic Cloud resource management system. It is based on the premise that migrating VMs onto a smaller number of Physical Machines (PMs) can achieve both optimization objectives: increasing the utilization of Cloud servers while concomitantly reducing the energy consumption of the Cloud data center. However, packing more VMs into a single server may lead to poor Quality of Service (QoS), since VMs share the underlying physical resources of the PM. To address this, VM Consolidation (VMC) algorithms are designed to dynamically select VMs for migration by considering the impact on QoS in addition to the above-mentioned optimization objectives. VMC is an NP-hard problem, and hence a wide range of heuristic and meta-heuristic VMC algorithms have been proposed that aim to achieve near-optimality. Since VMC is a highly popular research topic and a plethora of researchers are presently working in this area, the related literature is extremely broad. It is therefore a non-trivial task to cover such extensive literature and identify strong distinguishing aspects by which VMC algorithms can be classified and critically compared, something that is missing in existing surveys. In this chapter, we classify and critically review VMC algorithms from a multitude of viewpoints so that readers can truly benefit. Finally, we conclude with valuable future directions to pave the way for fellow researchers to further contribute to this area.
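A common heuristic baseline for the NP-hard packing problem behind VM consolidation is first-fit decreasing bin packing; the sketch below is a generic illustration of that family, not any specific algorithm from the surveyed literature, and it ignores the QoS and migration-cost terms real VMC algorithms add.

```python
def first_fit_decreasing(vm_demands, pm_capacity):
    """Greedy VM consolidation: place each VM (largest demand first) on the
    first physical machine with enough spare capacity, opening a new PM only
    when no existing one fits. Returns a list of PMs, each a list of demands."""
    pms = []
    for demand in sorted(vm_demands, reverse=True):
        for pm in pms:
            if sum(pm) + demand <= pm_capacity:
                pm.append(demand)
                break
        else:                      # no existing PM fits: power on a new one
            pms.append([demand])
    return pms
```

Meta-heuristic VMC algorithms typically start from a packing like this and then trade off fewer hosts against SLA-violation risk and migration overhead.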
- Research Article
3
- 10.1080/13614568.2013.834487
- Dec 1, 2013
- New Review of Hypermedia and Multimedia
With the rapid development of web technology and smartphones, multimedia content has spread all over the Internet. The prevalence of virtualization technology enables multimedia service providers to run media servers in virtualized servers or rented virtual machines (VMs) in a cloud environment. Although server consolidation using virtualization can substantially increase the efficient use of server resources, it introduces resource competition among VMs running different applications. Current hypervisors make no Quality of Service (QoS) guarantees for media-based applications if they are consolidated with other network-intensive applications, which leads to significant performance degradation. For example, Xen only offers a static method to allocate network bandwidth. In this paper, we find that the performance of media-based applications running in VMs degrades seriously when they are consolidated with other VMs running network-intensive applications, and we argue that dynamic network bandwidth allocation is essential to guarantee the QoS of media-based applications. We then present a dynamic network bandwidth allocation system for virtualized environments, which allocates network bandwidth dynamically and effectively and does not interrupt running services in VMs. The experiments show that our system can not only guarantee the QoS of media-based applications but also maximize the system's overall performance while ensuring that QoS.
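One simple way to realize dynamic bandwidth allocation of the kind argued for here is a guaranteed-minimum-plus-proportional-share policy recomputed each scheduling round: media VMs get their guarantee first, and leftover bandwidth is shared by unmet demand. The function and its parameters are illustrative assumptions, not the paper's actual system.

```python
def allocate_bandwidth(total_mbps, vms):
    """Split link bandwidth among VMs for one scheduling round.

    Each VM first receives its guaranteed minimum (capped by its demand);
    leftover bandwidth is then shared in proportion to unmet demand.
    vms: {name: (demand_mbps, guaranteed_min_mbps)}."""
    alloc = {name: min(demand, guar) for name, (demand, guar) in vms.items()}
    remaining = total_mbps - sum(alloc.values())
    residual = {name: vms[name][0] - alloc[name] for name in vms}
    total_residual = sum(residual.values())
    if total_residual > 0 and remaining > 0:
        grant = min(remaining, total_residual)  # never allocate past demand
        for name in vms:
            alloc[name] += grant * residual[name] / total_residual
    return alloc
```

Recomputing this per round is what makes the allocation "dynamic": when a network-intensive VM's demand drops, the media VM's share grows automatically instead of being pinned by a static cap.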
- Conference Article
10
- 10.1109/isorcw.2012.23
- Apr 1, 2012
As the foundation of cloud computing, server consolidation allows multiple computing infrastructures to run as virtual machines on a single physical node. It improves the utilization of most kinds of resources, but not memory, under the x86 architecture. Because of inaccurate memory usage estimates and the lack of memory resource management, data centers suffer considerable service performance degradation even though they occupy large amounts of memory. Furthermore, memory becomes insufficient for a physical server when many virtual machines depend on it. To improve this, we present a dynamic memory scheduling system called DMSS, which can manage memory resources in server consolidation environments and allocate memory among virtual machines on demand. We have designed and implemented the corresponding memory scheduling policy on the Xen virtualization platform to enhance memory efficiency and achieve service-level agreements. Benchmarks show that DMSS can respond accurately and rapidly to memory changes and save more than 30% of physical memory with less than 5% performance degradation. DMSS thus brings economic benefits to cloud service providers because more virtual machines can be accommodated at lower cost.
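On-demand memory scheduling of the kind DMSS performs can be sketched as periodic rebalancing driven by per-VM working-set estimates; the real system sits on Xen's ballooning mechanism, and the proportional policy below, including the headroom fraction, is an assumed simplification.

```python
def rebalance_memory(total_mb, vms, reserve_frac=0.1):
    """Redistribute host memory among VMs in proportion to their estimated
    working sets, giving each VM a small headroom reserve on top.

    vms: {name: working_set_mb}. Returns {name: target_allocation_mb};
    under memory pressure all targets are scaled down evenly."""
    targets = {name: ws * (1 + reserve_frac) for name, ws in vms.items()}
    need = sum(targets.values())
    if need > total_mb:                  # pressure: shrink everyone fairly
        scale = total_mb / need
        targets = {name: t * scale for name, t in targets.items()}
    return targets
```

A host agent would run this each sampling interval and inflate or deflate each VM's balloon toward its target, which is how memory freed by idle VMs becomes available to busy ones.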
- Conference Article
8
- 10.1109/rtas.2018.00022
- Apr 1, 2018
Real-time virtualization is an emerging technology for embedded systems integration and latency-sensitive cloud applications. Earlier real-time virtualization platforms require offline configuration of the scheduling parameters of virtual machines (VMs) based on their worst-case workloads, but this static approach results in pessimistic resource allocation when the workloads in the VMs change dynamically. Here, we present Multi-Mode-Xen (M2-Xen), a real-time virtualization platform for dynamic real-time systems where VMs can operate in modes with different CPU resource requirements at run-time. M2-Xen has three salient capabilities: (1) dynamic allocation of CPU resources among VMs in response to their mode changes, (2) overload avoidance at both the VM and host levels during mode transitions, and (3) fast mode transitions between different modes. M2-Xen has been implemented within Xen 4.8 using the real-time deferrable server (RTDS) scheduler. Experimental results show that M2-Xen maintains real-time performance in different modes, avoids overload during mode changes, and performs fast mode transitions.
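The overload-avoidance behavior described for M2-Xen can be illustrated with an admission test on CPU budgets: a VM's requested mode change is granted only if the new total budget still fits the host's capacity. The names and flat budget model are illustrative assumptions, not M2-Xen's actual RTDS interface, which works with per-VCPU budgets and periods.

```python
def request_mode_change(budgets, vm, new_budget, capacity):
    """Grant a VM's mode change only if the new total CPU budget still fits
    the host capacity; otherwise refuse it to avoid host-level overload.

    budgets: {vm_name: CPU budget as a fraction of host capacity}."""
    proposed = dict(budgets)
    proposed[vm] = new_budget
    if sum(proposed.values()) <= capacity:
        return proposed, True    # transition admitted, budgets updated
    return budgets, False        # transition refused, old budgets kept
```

Refusing the transition outright is the simplest policy; a fuller sketch would queue the request or shrink other VMs' budgets, which is closer to the dynamic reallocation the paper describes.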
- Conference Article
3
- 10.1109/ucc.2014.48
- Dec 1, 2014
Cloud infrastructures are prone to various anomalies due to their ever-growing complexity and dynamics. Monitoring the behavior of dynamic resource management systems is necessary to guarantee cloud reliability. In this paper, we present AMAD, a system designed to detect abusive use of dynamic virtual machine migration, as in the abusive virtual machine migration attack. This attack is performed by malicious manipulation of the amounts of resources consumed by Virtual Machines (VMs). AMAD identifies the VMs possibly at the origin of the attack by analyzing the resource consumption profiles of the VMs to detect those that are fluctuating and highly correlated. We have implemented AMAD on top of the VMware ESXi platform and evaluated it both on our lab platform and under real cloud configurations. Our results show that AMAD pinpoints, with high accuracy, the attacking VMs that were intentionally injected in our experiments.
- Conference Article
13
- 10.1109/icccnt.2013.6726665
- Jul 1, 2013
The concept of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how business potential extends today. In recent advancements, Cloud computing applications are provided as services to end users. The application services hosted under the Cloud computing model have complex provisioning, configuration, and deployment requirements. Using Cloud computing resources efficiently and gaining maximum profit through efficient utilization of resources is one of the ultimate goals of Cloud computing service providers. Repetitive evaluation of the performance of Cloud provisioning policies, application workload models, and resource performance models in a dynamic system is difficult to achieve, and a rather time-consuming and costly approach. To overcome this challenge, the CloudAnalyst simulator, based on CloudSim, has been proposed, which enables modeling and simulation in the cloud's ambience. The objective of this paper is to show that the choice of VM scheduling policy in the Cloud computing model significantly improves application performance under resource and service demand variations. We discuss different Virtual Machine (VM) scheduling policies and analyze their performance in the virtual environment of cloud computing in order to achieve better Quality of Service (QoS).
- Research Article
12
- 10.1016/j.jcss.2014.06.018
- Jul 5, 2014
- Journal of Computer and System Sciences
A novel memory allocation scheme for memory energy reduction in virtualization environment
- Research Article
8
- 10.1007/s11227-018-2508-1
- Aug 4, 2018
- The Journal of Supercomputing
The development of modern techniques such as virtualization underlies new solutions to the problem of reducing energy consumption in cloud computing. However, for infrastructure-as-a-service providers, guaranteeing energy savings is a difficult process. Analysis of application workloads shows that the average utilization of virtual machines fluctuates considerably; therefore, deciding how to control such fluctuations in virtual machines plays a significant role in improving the energy consumption of datacenters. In this study, an adaptable model called the virtual machine dynamic frequency system (VMDFS) has been developed, whose innovation is monitoring the average fluctuations of workloads to vary the CPU frequency of virtual machines dynamically at runtime. In this model, an enhanced exponential moving average method is used to predict workload fluctuations; after calculating a smoothing coefficient for the utilization fluctuations, the coefficient is used to control the CPU frequency (or computing power) of virtual machines. The proposed model was compared with several baseline approaches such as DVFS using real datasets from the CoMon project (PlanetLab). The results of experiments on VMDFS show that, besides reducing service-level agreement violations by up to 43.22%, overall energy consumption is reduced by 40.16%. In addition, the overall runtime before a host shutdown increased by 17.44% on average, while the runtime before a virtual machine migration increased by 7.2%. This also shows an overall decrease in the number of migrations.
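The predict-then-scale loop can be sketched with a plain exponential moving average; the paper's "enhanced" EMA and its smoothing coefficient are not detailed in the abstract, so the plain EMA and the linear mapping from predicted utilization to frequency below are assumptions.

```python
def ema_forecast(samples, alpha=0.3):
    """Exponentially weighted moving average of CPU utilization samples;
    the final value serves as the forecast for the next interval."""
    ema = samples[0]
    for s in samples[1:]:
        ema = alpha * s + (1 - alpha) * ema
    return ema

def target_frequency(samples, f_min, f_max, alpha=0.3):
    """Map predicted utilization (0..1) linearly onto the VM's CPU
    frequency range, so quiet periods run the vCPU slower."""
    u = max(0.0, min(1.0, ema_forecast(samples, alpha)))
    return f_min + u * (f_max - f_min)
```

The smoothing factor alpha controls how aggressively the forecast tracks recent fluctuations: a small alpha damps spikes (saving energy at the risk of SLA violations), a large one reacts quickly but scales frequency up for transient bursts.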
- Research Article
16
- 10.1109/tc.2016.2532865
- Nov 1, 2016
- IEEE Transactions on Computers
This paper presents the design and implementation of MemPipe, a dynamic shared memory management system for high-performance network I/O among virtual machines (VMs) located on the same host. MemPipe delivers efficient inter-VM communication with three unique features. First, MemPipe employs an inter-VM shared memory pipe to enable high-throughput data delivery for both TCP and UDP workloads among co-located VMs. Second, instead of statically allocating shared memory, MemPipe manages its shared memory pipes through a demand-driven and proportional memory allocation mechanism, which can dynamically enlarge or shrink the shared memory pipes based on the demand of the workloads in each VM. Third, MemPipe employs a number of optimizations, such as time-window-based streaming partitions and socket buffer redirection, to further improve its performance. Extensive experiments show that MemPipe improves the throughput of conventional (native) inter-VM communication by up to 45 times, reduces latency by up to 62 percent, and achieves up to 91 percent shared memory utilization.
- Research Article
32
- 10.1016/j.scico.2009.04.001
- May 4, 2009
- Science of Computer Programming
Efficient virtual machine support of runtime structural reflection