Leveraging Virtualization Technology for Command and Control Systems Training
Abstract: The North American Aerospace Defense Command (NORAD) and United States Northern Command (USNORTHCOM) (N-NC) crew training has been hindered by an inability to conduct dynamic training and exercises on multiple Command and Control (C2) systems. There was no common simulation injector because of stove-piped acquisitions and legacy interfaces that were incompatible with the Live-Virtual-Constructive Toolkits. The N-NC Joint Training and Exercise Directorate could not afford traditional replication or emulation of all C2 systems and their data sources, nor the inevitable sustainment costs. This paper presents a cost-effective solution to provide dynamic scenario injection into multiple C2 systems: leveraging server and desktop virtualization technology described in previous I/ITSEC papers. The virtualization process transforms stand-alone systems into functionally equivalent virtual machines (VMs). Server virtualization technology lets multiple VMs run as guests on a single host, and a host can support VMs running different operating systems. This allows entire processing strings, distributed throughout North America, to be converted into VMs on a single server. Because the VMs inherit the fidelity of the actual processors, their outputs are as authentic as the operational systems. These VMs feed processed simulation event data into actual C2 systems or equivalent VMs. Future operational system upgrades can be virtualized and then replace existing VMs without changing this infrastructure. Desktop virtualization technology allows users to run multiple VMs in separate windows on a common display. N-NC exploited desktop virtualization to simplify the trainees' and model operators' workspaces. They can view and manage multiple VMs with one monitor, keyboard, and mouse (controlling simulations, lower-echelon processing and operator interaction, and viewing C2 workstation displays).
- Book Chapter
- 10.1007/978-981-33-4299-6_30
- Jan 1, 2021
Server virtualization is an automation method for controlling and monitoring tasks running in multiple virtual machines. It uses software to divide a physical server into multiple virtual machines, each running its own operating system, which increases the effective utilization of the server. The VMware ESXi operating system can be personalized with required designs, updates, and patches. This study focuses on personalizing the ESXi operating system with a customized application stack. The bare-metal compute hardware boots directly into this application stack and is then ready to host virtual machines. HPE Image Streamer is used to host, configure, and serve the operating systems to HPE Synergy compute modules.
- Conference Article
- 10.2991/ccis-13.2013.54
- Jan 1, 2013
Virtual desktop technology separates users from resources, contributing to terminal security and improved resource utilization, and it simplifies centralized management of resources. However, virtualization also introduces security risks unique to the virtual desktop. Identity authentication is a key technology for addressing virtual desktop security and the foundation for more complex protective measures. This article first describes the principle of Combined Public Key (CPK) cryptosystems; then, based on the characteristics of the virtual desktop, two CPK-based authentication methods are proposed, one for requesting virtual resources and one for using them. The user and the virtual machine are bound through a federated identity in order to prevent fraudulent use of virtual machines. Finally, a safety and performance analysis of the proposed authentication methods is given. A virtual desktop, which completely separates the user from the data, is convenient for centralized management of users' systems, applications, and data. With increased resource utilization, improved business continuity, reduced terminal security risk, and many other advantages, it has been widely adopted in recent years. At the same time, its unique security risks have gradually received attention. Because the virtual desktop is based on virtualization technology, multiple virtual machines share hardware resources, so corresponding security solutions are needed for user data isolation, virtual machine protection, data storage, and so on (1-3).
Identity authentication, one of the key technologies for virtual desktop security, ensures that users can remotely log in and use only their own virtual resources and manage their own data; more complex, fine-grained protection measures can also be implemented in a virtual desktop system on the basis of identity authentication. According to the characteristics of the virtual desktop, two CPK-based authentication methods are proposed in this article, one for requesting virtual resources and one for using them. To effectively resist the risk of fraudulent use of virtual machines, a method of binding the user ID and the virtual machine UUID through a federated identity is given. Finally, a safety and performance analysis of the proposed authentication methods is given.
- Research Article
81
- 10.1016/j.future.2018.12.035
- Dec 26, 2018
- Future Generation Computer Systems
Combining containers and virtual machines to enhance isolation and extend functionality on cloud computing
- Book Chapter
- 10.1016/b978-1-59749-582-0.00001-4
- Jan 1, 2010
- Citrix XenDesktop Implementation
Chapter 1 - Introduction
- Book Chapter
- 10.1016/b978-1-59749-305-5.00001-3
- Jan 1, 2009
- Virtualization for Security
Chapter 1 - An Introduction to Virtualization
- Conference Article
5
- 10.1109/cyberc.2016.83
- Oct 1, 2016
Virtualization technology has been widely adopted in Cloud data centers for adaptive resource provisioning. With virtualization, multiple virtual machines (VMs) can be co-located on a single physical host to yield maximum efficiency. However, VMs whose CPU utilization is highly correlated with that of their co-located peers are more likely to trigger overloading incidents. This work analyzes the effects of correlation-based VM allocation criteria on Cloud data centers. The correlations among VMs' CPU utilizations are treated as parameters for decision making in the VM allocation process. Three different expressions of correlation-based criteria are introduced and evaluated in this work. According to our simulation results, obtained from CloudSim with real-world workload traces, Cloud data centers with correlation-based allocation criteria perform better at reducing energy consumption and avoiding Service Level Agreement violations than those with power-based criteria.
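As a hedged illustration of the idea above (not the paper's exact criteria, whose three expressions are not reproduced here), a correlation-based placement rule might score each candidate host by the mean Pearson correlation between the incoming VM's CPU-utilization trace and the traces of the VMs already resident there, preferring the least-correlated host:

```python
# Sketch of a correlation-based VM placement criterion. Function names and the
# scoring rule are illustrative assumptions, not the paper's exact method.
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation of two equal-length CPU-utilization traces."""
    mx, my = mean(xs), mean(ys)
    sx, sy = pstdev(xs), pstdev(ys)
    if sx == 0 or sy == 0:
        return 0.0  # a flat trace correlates with nothing
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (sx * sy)

def pick_host(new_vm_trace, hosts):
    """hosts: dict host_name -> list of co-located VMs' CPU traces.
    Returns the host whose resident VMs are least correlated with the new VM,
    so utilization peaks are less likely to coincide and overload the host."""
    def score(traces):
        if not traces:
            return 0.0  # empty host: no correlated peers at all
        return mean(pearson(new_vm_trace, t) for t in traces)
    return min(hosts, key=lambda h: score(hosts[h]))

hosts = {
    "h1": [[10, 50, 90, 50], [20, 60, 80, 40]],  # bursty, in phase with new VM
    "h2": [[90, 50, 10, 50]],                    # anti-phase workload
}
new_vm = [12, 55, 88, 45]
print(pick_host(new_vm, hosts))  # anti-correlated host wins -> "h2"
```

A power-based criterion would instead pick the host minimizing the estimated power increase; the correlation rule above trades that for a lower chance of coincident peaks.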
- Conference Article
- 10.1109/iccitechnology.2013.6579533
- Jun 1, 2013
It is estimated that the ICT industry contains no fewer than one billion personal computers (PCs), with power consumption ranging from 100 to 200 W per workstation. Virtualization and thin-client technologies, mediated by broadband network links, will have a very significant impact on efforts to reduce energy consumption and to enhance data security throughout the ICT industry. Moreover, virtualization is quickly entering the small and medium-sized business (SMB) market, promising better (centralized) energy management by means of consolidation and resource reuse. Indeed, multiple virtual machines (VMs) running Virtual Desktops (VDs) can be hosted on a single physical server, requiring only hypervisors and adequate desktop delivery protocols to deliver the VD service to distant users on any suitable device. In this paper, we propose to deliver a VD service over a federation of data centers (DCs) sparsely connected to some nodes of the core network. The proposed model aims to provide adequate resource dimensioning at the DCs and the network while minimizing the inherent energy consumption. We show by means of simulations the importance of user profiling and activity prediction in avoiding over-dimensioning, especially at the DCs. We also show the value of energy-aware modes at the servers.
- Research Article
4
- 10.1016/j.future.2018.01.017
- Feb 2, 2018
- Future Generation Computer Systems
LayerMover: Fast virtual machine migration over WAN with three-layer image structure
- Conference Article
- 10.1109/hpcc.and.euc.2013.240
- Nov 1, 2013
Live migration of virtual machines (VMs) enables VM mobility and contributes to advantages of virtualization such as energy saving, high availability, fault tolerance, and workload balancing. However, VM migration solutions in both academia and industry concentrate more on memory migration than on storage migration. Many applications with intensive disk I/O rely on local storage, especially in high-performance computing, and migration of shared storage is also necessary for consolidation and workload balance. Current approaches to storage migration can hardly work effectively in disk-I/O-intensive environments: they cannot reduce migration time and guarantee the disk I/O performance of VMs at the same time. This paper proposes an approach called Partners Assisted Storage Migration (PASM). We are the first to utilize the disk I/O capacity of pre-allocated storage nodes to relieve the contention between VMs' intensive disk I/O and storage migration. PASM migrates VMs' storage effectively compared to the current methods, post-copy and write-mirror. Experiments covering both single-VM and multiple-VM migration show that PASM can save 78.9% of migration time and achieve an additional 27.1% disk I/O performance over existing methods.
- Conference Article
7
- 10.1109/infocom41043.2020.9155415
- Jul 1, 2020
Live migration is a key technique to transfer virtual machines (VMs) from one machine to another. Often multiple VMs need to be migrated in response to events such as server maintenance, load balancing, and impending failures. However, VM migration is a resource-intensive operation that pressures the CPU, memory, and network resources of the source and destination hosts as well as intermediate network links. The live migration mechanism ends up contending for finite resources with the VMs that it needs to migrate, which prolongs the total migration time and worsens the performance of applications running inside the VMs. In this paper, we propose SOLive, a new approach to reduce resource contention between the migration process and the VMs being migrated. First, by considering the nature of VM workloads, SOLive manages the order in which multiple VMs are migrated to significantly reduce the total migration time. Secondly, to reduce the network contention between the migration process and the VMs, SOLive uses a combination of software-defined networking-based resource reservation and scatter-gather-based VM migration to quickly deprovision the source host. A prototype implementation of our approach in the KVM/QEMU platform shows that SOLive quickly evicts VMs from the source host with low impact on VMs' performance.
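To make the ordering idea concrete, here is a minimal sketch of workload-aware migration ordering. The heuristic (estimate each VM's pre-copy time from its memory size and page-dirtying rate, then evict quick wins first) is an illustrative assumption, not SOLive's published policy:

```python
# Illustrative sketch of workload-aware VM migration ordering. The estimator
# and the shortest-first rule are assumptions for illustration only.
def est_migration_time(mem_mb, dirty_mb_s, bw_mb_s):
    """Rough pre-copy time estimate: pages dirtied during the copy must be
    re-sent, so the effective bandwidth is the link rate minus the dirty rate."""
    effective = bw_mb_s - dirty_mb_s
    if effective <= 0:
        return float("inf")  # pre-copy would never converge for this VM
    return mem_mb / effective

def order_migrations(vms, bw_mb_s):
    """vms: list of (name, mem_mb, dirty_mb_s) tuples.
    Migrate the quickest VMs first so the source host sheds load early."""
    return sorted(vms, key=lambda v: est_migration_time(v[1], v[2], bw_mb_s))

vms = [("db", 8192, 80.0), ("web", 2048, 5.0), ("batch", 4096, 1.0)]
print([name for name, *_ in order_migrations(vms, bw_mb_s=100.0)])
# -> ['web', 'batch', 'db']
```

A write-heavy VM like `db` above is deliberately deferred: under pre-copy its transfer monopolizes the link longest, so scheduling it last frees the host's other resources sooner.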
- Conference Article
2
- 10.2991/ameii-15.2015.36
- Jan 1, 2015
Virtual machines face storage bottlenecks in practice, mainly in two respects: the I/O limit and dynamic virtual machine migration. GlusterFS, an open-source distributed file storage system, uses a scale-out architecture and an elastic hash algorithm to address the I/O bottleneck. GlusterFS can automatically replicate files and provides a file-sharing service that addresses the dynamic VM migration bottleneck. To evaluate GlusterFS as the underlying storage in a cloud environment, the IOzone file-system benchmark is used to test its performance. The results show that GlusterFS storage performance scales linearly as the number of physical servers is increased dynamically, that write speed remains stable when multiple clients write large files to GlusterFS at the same time, and that users can define their own data replication count. Therefore, GlusterFS is a good choice for solving the storage bottleneck of virtual machines in a cloud environment.
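The elastic hash mentioned above lets any client locate a file without a metadata server. A toy sketch in that spirit (the real GlusterFS algorithm assigns a hash range to each brick via extended attributes; the modulo placement below is a deliberate simplification):

```python
# Toy sketch of metadata-free, hash-based file placement, in the spirit of
# GlusterFS's elastic hashing. Real GlusterFS maps hash *ranges* to bricks so
# bricks can be added without reshuffling everything; plain modulo shown here
# is only an illustration of the metadata-server-free lookup idea.
import hashlib

def place(filename, bricks):
    """Deterministically map a file to a brick from its name alone, so every
    client computes the same location without asking a central server."""
    h = int(hashlib.sha1(filename.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

bricks = ["server1:/brick", "server2:/brick", "server3:/brick"]
target = place("vm-image.qcow2", bricks)
print(target)
# Every client resolves the same file to the same brick:
assert target == place("vm-image.qcow2", bricks)
```

Because placement is a pure function of the file name, reads and writes scale with the number of bricks, which is consistent with the linear scaling the abstract reports.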
- Conference Article
5
- 10.1109/icpads.2011.67
- Dec 1, 2011
Increasing Internet business and computing footprints motivate server consolidation in data centers. Through virtualization technology, server consolidation can reduce the number of physical hosts and provide scalable services. However, ineffective memory usage among multiple virtual machines (VMs) becomes the bottleneck in server consolidation environments. Because of inaccurate memory-usage estimates and the lack of memory resource management, data centers suffer significant service performance degradation even though they occupy large amounts of memory. To improve this scenario, we first introduce the VM's memory division view and the VM's free memory division view. Based on them, we propose a hierarchical memory service mechanism. We have designed and implemented the corresponding memory scheduling algorithm to enhance memory efficiency and meet service level agreements. Benchmark test results show that our implementation can save 30% of physical memory with 1% to 5% performance degradation. Based on the Xen virtualization platform and balloon driver technology, our work brings real benefits to a commercial cloud computing center that provides services on more than 2,000 VMs to cloud computing users.
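To illustrate the kind of memory scheduling the abstract describes, here is a minimal demand-proportional rebalancing sketch. It is an assumption-laden simplification: the paper's hierarchical division views and the Xen balloon driver interface are not modeled, and the floor value is invented for illustration:

```python
# Illustrative demand-proportional memory rebalancing across co-located VMs.
# The proportional rule and the 128 MB floor are assumptions for illustration;
# the paper's hierarchical mechanism and Xen ballooning are not modeled here.
def rebalance(demands_mb, total_mb):
    """demands_mb: dict vm_name -> estimated working-set size in MB.
    Give each VM host memory proportional to its estimated demand, but never
    less than a small floor, so idle VMs release memory to busy neighbors."""
    floor = 128
    total_demand = sum(demands_mb.values())
    alloc = {}
    for vm, demand in demands_mb.items():
        share = int(total_mb * demand / total_demand)
        alloc[vm] = max(share, floor)  # a balloon driver would enforce this
    return alloc

print(rebalance({"a": 1000, "b": 3000}, 8000))  # -> {'a': 2000, 'b': 6000}
```

In a real system each target would be handed to the hypervisor's balloon driver, which inflates inside over-provisioned guests to reclaim their pages for the others.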
- Research Article
16
- 10.1088/1742-6596/219/5/052015
- Apr 1, 2010
- Journal of Physics: Conference Series
Virtualization technologies such as Xen can be used in order to satisfy the disparate and often incompatible system requirements of different user groups in shared-use computing facilities. This capability is particularly important for HEP applications, which often have restrictive requirements. The use of virtualization adds flexibility, however, it is essential that the virtualization technology place little overhead on the HEP application. We present an evaluation of the practicality of running HEP applications in multiple Virtual Machines (VMs) on a single multi-core Linux system. We use the benchmark suite used by the HEPiX CPU Benchmarking Working Group to give a quantitative evaluation relevant to the HEP community. Benchmarks are packaged inside VMs and then the VMs are booted onto a single multi-core system. Benchmarks are then simultaneously executed on each VM to simulate highly loaded VMs running HEP applications. These techniques are applied to a variety of multi-core CPU architectures and VM configurations.
- Research Article
12
- 10.1007/s11432-011-4273-0
- May 24, 2011
- Science China Information Sciences
Desktop virtualization is a very hot concept in both industry and academia. Since a virtualized desktop system is based on multiple virtual machines (VMs), it is necessary to design a distributed storage system to manage the VM images. In this paper, we design a distributed storage system, VMStore, by taking into account three important characteristics: high-performance VM snapshots, booting optimization from multiple images, and redundancy removal from image data. We adopt a direct index structure of blocks for VM snapshots to speed up VM booting significantly; provide a distributed storage structure with good bandwidth scalability by dynamically changing the number of storage nodes; and propose a data-preprocessing strategy with intelligent object-partitioning techniques that eliminates duplication more effectively. The performance analysis of VMStore focuses on two metrics: the speedup of VM booting and the overhead of de-duplication. Experimental results show the efficiency and effectiveness of VMStore.
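The redundancy-removal idea is worth a concrete sketch: VM images derived from the same base OS share most of their blocks, so storing each unique block once shrinks the store dramatically. The fixed-size chunking below is an illustrative simplification of VMStore's intelligent object partitioning:

```python
# Minimal sketch of block-level de-duplication for VM images. Fixed-size
# blocks are an illustrative simplification; VMStore's object partitioning
# is more sophisticated (content-aware, variable-size).
import hashlib

BLOCK = 4096  # bytes per block

def dedupe(image_bytes, store):
    """Split an image into blocks and keep one copy per unique block hash.
    store: dict sha256-hex -> block bytes, shared across all images.
    Returns the image's 'recipe': the ordered list of block hashes."""
    recipe = []
    for off in range(0, len(image_bytes), BLOCK):
        block = image_bytes[off:off + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # duplicate blocks stored only once
        recipe.append(digest)
    return recipe

def restore(recipe, store):
    """Rebuild the full image from its recipe."""
    return b"".join(store[d] for d in recipe)

store = {}
image_a = b"A" * BLOCK + b"B" * BLOCK
image_b = b"A" * BLOCK + b"C" * BLOCK  # shares its first block with image_a
ra, rb = dedupe(image_a, store), dedupe(image_b, store)
print(len(store))  # 3 unique blocks stored instead of 4
assert restore(ra, store) == image_a and restore(rb, store) == image_b
```

The de-duplication overhead the paper measures corresponds to the hashing and index lookups above, paid on every write in exchange for the storage savings.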
- Conference Article
7
- 10.1109/ipdpsw.2013.167
- May 1, 2013
Virtual machine (VM) migration is affected by network latency and throughput, which are highly fluctuating and unpredictable in wide-area networks (WANs). Hence, it is difficult to statically minimize the time required to transfer a large number of VMs across a WAN. The goal of this work, motivated by disaster-recovery scenarios, is to migrate as many VMs as possible during a given period of time. One approach is to migrate a large number of VMs in parallel, but this leads to long migration times for each individual VM. Long migration times are problematic in catastrophic circumstances where resources are limited and can fail within a short period of time. Thus, it is important to shorten both the total time required to migrate multiple VMs and the migration time of individual VMs. Due to network performance fluctuations, the optimal number of parallel migrations changes over time. This work proposes a feedback-based controller that adapts the number of parallel VM migrations in response to changes in the WAN. The controller implements an algorithm inspired by the TCP congestion avoidance algorithm in order to regulate the number of VMs in transit depending on the network conditions. Experiments using a prototype controller confirm that it is possible to control the migration of a set of VMs while shortening both the total and individual migration times. The experiments show that the controller shortens the individual migration time by up to 5.7-fold compared to static VM migration, where the number of parallel migrations does not change until all migrations are completed. The contributions of this work are 1) introducing migration strategies for multiple VMs on WANs and 2) proposing a hypervisor-independent controller that adapts to network bandwidth fluctuations in disaster scenarios.
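The TCP-inspired control loop can be sketched as additive-increase/multiplicative-decrease (AIMD) over the parallelism level. The thresholds, cap, and throughput probe below are illustrative assumptions, not the paper's exact controller:

```python
# Sketch of a TCP-style (AIMD) controller for the number of parallel VM
# migrations over a WAN. The update rule, the cap, and the simulated samples
# are illustrative assumptions, not the paper's exact design.
def aimd_step(parallel, throughput, prev_throughput, max_parallel=16):
    """Additively grow parallelism while aggregate migration throughput is
    not dropping; multiplicatively back off when it falls, treating the drop
    as a sign of WAN congestion (analogous to TCP congestion avoidance)."""
    if throughput >= prev_throughput:
        return min(parallel + 1, max_parallel)  # additive increase
    return max(parallel // 2, 1)                # multiplicative decrease

# Simulated run over noisy aggregate-throughput samples (MB/s):
samples = [100, 120, 130, 90, 95, 110, 60]
parallel, prev = 1, 0
trace = []
for s in samples:
    parallel = aimd_step(parallel, s, prev)
    prev = s
    trace.append(parallel)
print(trace)  # -> [2, 3, 4, 2, 3, 4, 2]
```

Ramping up probes for spare WAN capacity; halving on a throughput drop quickly sheds in-flight migrations, keeping each individual VM's migration time bounded even as conditions fluctuate.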