Efficient virtual machine support of runtime structural reflection
- Research Article
12
- 10.1016/j.jss.2012.08.016
- Aug 29, 2012
- Journal of Systems and Software
Efficient support of dynamic inheritance for class- and prototype-based languages
- Research Article
15
- 10.1016/j.infsof.2013.09.002
- Sep 17, 2013
- Information and Software Technology
A hybrid class- and prototype-based object model to support language-neutral structural intercession
- Research Article
1
- 10.1016/j.infsof.2018.03.012
- Apr 3, 2018
- Information and Software Technology
Efficient runtime aspect weaving for Java applications
- Research Article
3
- 10.1145/3276479
- Oct 24, 2018
- Proceedings of the ACM on Programming Languages
To leverage the benefits of modern hardware, dynamic languages must support parallelism, and parallelism requires a virtual machine (VM) capable of parallel execution — a parallel VM. However, unrestricted concurrency and the dynamism of dynamic languages pose great challenges to the implementation of parallel VMs. In a dynamic language, a program changing itself is part of the language model. To help the VM, languages often choose memory models (MMs) that weaken consistency guarantees. With weaker guarantees, local program state cannot be affected by every concurrent state change, and less interference allows a VM to make local assumptions about the program state that are not immediately violated. These local assumptions are essential for a VM's just-in-time compiler to deliver state-of-the-art performance. Unfortunately, some dynamic languages employ MMs that give exceedingly strong consistency guarantees and thereby hinder the development of parallel VMs. Such is the case in particular for languages that depend on a global interpreter lock, which mandates an MM with sequential consistency and instruction atomicity. In this paper, we reflect on a first implementation of the Parallel RPython execution model, which facilitates the development of parallel VMs by decoupling language semantics from the synchronization mechanism used within the VM. The implementation addresses the challenges imposed by strong MMs through strict isolation of concurrent computations. This isolation builds on transactional parallel worlds, which are implemented with a novel combination of software techniques and the capabilities of modern hardware. We evaluate a set of parallel Python programs on a parallel VM that relies on Parallel RPython's implementation. Compared with a serial baseline VM that relies on a global interpreter lock, the parallel VM achieves speedups of up to 7.5× on 8 CPU cores.
The evaluation shows that our realization of Parallel RPython meets the challenges of dynamic languages, and that it can serve as a solid foundation for the construction of parallel dynamic language VMs.
- Conference Article
10
- 10.1145/1289881.1289920
- Sep 30, 2007
Embedded platforms are resource-constrained systems in which the performance and memory requirements of executed code are of critical importance. However, standard techniques such as full just-in-time (JIT) compilation and/or adaptive optimization (AO) may not be appropriate for this type of system due to memory and compilation overheads. The research presented in this paper proposes a technique that combines some of the main benefits of JIT compilation, superoperators (SOs), and profile-guided optimization in order to deliver a lightweight Java bytecode compilation system, targeted at resource-constrained environments, that achieves runtime performance similar to that of state-of-the-art JIT/AO systems while having a minimal impact on runtime memory consumption. The key ideas are to use profiler-selected, extended bytecode basic blocks as superoperators (new bytecode instructions) and to perform few, but very targeted, JIT/AO-like optimizations at compile time only on the superoperators' bytecode, as directed by compilation "hints" encoded as annotations. As such, our system achieves performance competitive with a JIT/AO system, but with a much lower impact on runtime memory consumption. Moreover, it is shown that the proposed system can further improve program performance by selectively inlining method calls embedded in the chosen superoperators, as directed by runtime profiling data and with minimal impact on classfile size. For experimental evaluation, we developed three virtual machines (VMs) that employ the ideas presented above. The customized VMs are first compared (w.r.t. runtime performance) to a simple, fast-to-develop VM (baseline) and then to a VM that employs JIT/AO. Our best-performing system attains speedups ranging from a factor of 1.52 to a factor of 3.07 w.r.t. the baseline VM. When compared to a state-of-the-art JIT/AO VM, our proposed system performs better for three of the benchmarks and worse by less than a factor of 2 for three others. 
Moreover, our SO-extended VM outperforms the JIT/AO system by a factor of 16, on average, w.r.t. runtime memory consumption.
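The superoperator idea above can be illustrated with a toy stack VM: a profiler-selected basic block of bytecodes is replaced by one fused instruction, cutting dispatch overhead. This is a minimal sketch with hypothetical opcodes, not the paper's actual Java bytecode system.

```python
def run(code, env):
    """Dispatch loop for a toy stack VM (illustrative only)."""
    stack, pc = [], 0
    while pc < len(code):
        op, arg = code[pc]
        if op == "LOAD":            # push a variable
            stack.append(env[arg])
        elif op == "CONST":         # push a constant
            stack.append(arg)
        elif op == "ADD":           # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "STORE":         # pop into a variable
            env[arg] = stack.pop()
        elif op == "SO_INC":        # superoperator: LOAD x; CONST c; ADD; STORE x
            env[arg[0]] += arg[1]   # one dispatch instead of four
        pc += 1
    return env

# Original basic block: four dispatches per increment.
generic = [("LOAD", "x"), ("CONST", 1), ("ADD", None), ("STORE", "x")]
# After superoperator rewriting: one dispatch, same effect.
fused = [("SO_INC", ("x", 1))]

assert run(generic, {"x": 0}) == run(fused, {"x": 0}) == {"x": 1}
```

The fused instruction also opens the door to the targeted compile-time optimizations the paper applies, since the whole block is visible to the optimizer as a unit.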
- Conference Article
2
- 10.1145/2162049.2162073
- Mar 25, 2012
Modularity is a key concept for large and complex applications and an important enabler for collaborative research. In comparison, virtual machines (VMs) are still mostly monolithic pieces of software. Our goal is to significantly reduce the cost of extending VMs to efficiently host and execute multiple, dynamic languages. We are designing and implementing a VM following the "everything is extensible" paradigm. Among the novel use cases that will be enabled by our research are: VM extensions by third parties, support for multiple languages inside one VM, and a universal VM for mobile devices.
- Conference Article
28
- 10.1109/spis.2015.7422321
- Dec 1, 2015
Virtual machine (VM) scheduling with load balancing in cloud computing aims to allocate VMs to suitable physical machines (PMs) and balance resource usage among all of the PMs. Efficient scheduling strategies are needed to appropriately allocate VMs to physical resources, and dynamic forecasting of resource usage in each PM can improve VM scheduling. This paper combines ant colony optimization (ACO) with VM dynamic forecast scheduling (VM_DFS) into an algorithm called virtual machine dynamic prediction scheduling via ant colony optimization (VMDPS-ACO) to solve the VM scheduling problem. The algorithm analyzes the historical memory consumption of each PM to forecast the future memory consumption of the VMs on that PM and to allocate VMs efficiently on the cloud infrastructure. We implemented and evaluated the proposed algorithm in Matlab and compared its performance with VM_DFS [1]. The VM_DFS algorithm exploits a first-fit-decreasing (FFD) scheme with different queuing orders (sorting the list of VMs increasingly, decreasingly, or randomly) to schedule VMs and assign them to suitable PMs. We evaluated the proposed algorithm in both homogeneous and heterogeneous modes. The results indicate that VMDPS-ACO produces lower resource wastage than VM_DFS in both homogeneous and heterogeneous modes, as well as better load balancing among PMs.
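The VM_DFS baseline mentioned above uses a first-fit-decreasing (FFD) scheme. A minimal sketch of FFD placement on memory demand, with illustrative capacities and demand figures (not the paper's data):

```python
def ffd_place(vm_demands, pm_capacity):
    """Place VMs on PMs by first-fit decreasing; returns one list per PM."""
    pms = []  # each entry: [used_capacity, [(vm_id, demand), ...]]
    for vm_id, demand in sorted(vm_demands.items(),
                                key=lambda kv: kv[1], reverse=True):
        for pm in pms:
            if pm[0] + demand <= pm_capacity:   # first PM with room
                pm[0] += demand
                pm[1].append((vm_id, demand))
                break
        else:                                   # no PM fits: open a new one
            pms.append([demand, [(vm_id, demand)]])
    return [placed for _, placed in pms]

demands = {"vm1": 6, "vm2": 5, "vm3": 4, "vm4": 3, "vm5": 2}
layout = ffd_place(demands, pm_capacity=10)
# Demands 6,5,4,3,2 pack onto two capacity-10 PMs: [6,4] and [5,3,2].
assert len(layout) == 2
```

VMDPS-ACO improves on this by forecasting future memory consumption per PM and letting the ant colony search trade off wastage against load balance, rather than committing to a single greedy ordering.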
- Conference Article
3
- 10.1109/icaetr.2014.7012890
- Aug 1, 2014
There is an exponentially increasing demand for data generation, storage, access, and communication. Cloud computing emerged to meet these demands, and virtualization is the key concept operating at the base of the cloud computing stack. Virtual machine (VM) state is represented as a virtual disk file (image) created on the hypervisor's local file system, from which the VM is booted; a VM requires at least one disk to boot and function. Within a guest operating system, one can use block devices or files as virtual disks with the Kernel-based Virtual Machine (KVM). To date, no empirical study has quantified the runtime performance of the different virtual disk image formats. We have studied a representative application workload (I/O micro-benchmarks) on a local file system, i.e. a direct-attached storage (DAS) environment, in conjunction with RAW, QEMU's copy-on-write QCOW2, Microsoft's VHD, VirtualBox's VDI, VMware's VMDK, and Parallels' HDD. We have also investigated the impact of block size on application runtime performance. This paper provides a detailed runtime performance analysis of the different image formats based on parameters such as latency, bandwidth, and I/O operations performed per second (IOPS). Users today can choose from a pool of virtual disk image formats, but the choice is currently a black box, as no comparison or decision model exists for the different formats. This study provides insights into the performance of the various virtual disk image formats and offers guidelines to end users in implementing and using them.
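The three metrics the study reports per image format follow directly from a micro-benchmark's wall-clock time. A minimal sketch (with illustrative numbers, not the paper's measurements) of how IOPS, bandwidth, and mean latency are derived:

```python
def io_metrics(num_ops, block_size_bytes, elapsed_s):
    """Derive the three standard I/O metrics from one benchmark run."""
    iops = num_ops / elapsed_s                          # requests per second
    bandwidth = num_ops * block_size_bytes / elapsed_s  # bytes per second
    latency = elapsed_s / num_ops                       # mean seconds/request
    return iops, bandwidth, latency

# e.g. 40,000 random 4 KiB reads completing in 2.5 s:
iops, bw, lat = io_metrics(40_000, 4096, 2.5)
assert iops == 16_000.0       # 16k IOPS
assert bw == 65_536_000.0     # ~62.5 MiB/s
assert lat == 6.25e-05        # 62.5 µs mean latency
```

Note how block size couples the metrics: at a fixed IOPS ceiling, larger blocks raise bandwidth, which is why the study varies block size explicitly.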
- Conference Article
6
- 10.1145/1711506.1711507
- Oct 25, 2009
This paper describes how virtual classes can be supported in a virtual machine. Mainstream virtual machines such as the Java Virtual Machine and the .NET platform dominate the world today, and many languages are being executed on these virtual machines even though their embodied design choices conflict with the design choices of the virtual machine. For instance, there is a non-trivial mismatch between the mainstream virtual machines mentioned above and dynamically typed languages. One language concept that creates an even greater mismatch is virtual classes, in particular because fully general support for virtual classes requires generation of new classes at run-time by mixin composition. Languages like CaesarJ and Object Teams can express virtual classes restricted to the subset that does not require run-time generation of classes, because of the restrictions imposed by the Java Virtual Machine. We have chosen to support virtual classes by implementing a specialized virtual machine, and this paper describes how this virtual machine supports virtual classes with full generality.
- Book Chapter
8
- 10.1007/978-3-642-37832-4_4
- Jul 24, 2013
Live virtual machine (VM) migration is a technique for transferring an active VM from one physical host to another without disrupting it, and has been proposed to reduce the downtime of migrated overloaded VMs. Since VM migration takes much more time and cost than task migration, this study develops a novel approach to the problem of overloaded VMs that achieves system load balancing by assigning arriving tasks to another similar VM in the cloud environment. In addition, we propose a multi-objective optimization model that migrates these tasks to a new VM host using a multi-objective genetic algorithm (MOGA). In the proposed approach, there is no need to pause the VM during migration. Because VM live migration, in contrast to task migration, takes longer to complete and needs more idle capacity on the host physical machine (PM), the proposed approach significantly reduces time, downtime, memory, and cost consumption.
- Research Article
3
- 10.1145/2674025.2576209
- Mar 1, 2014
- ACM SIGPLAN Notices
We are interested in implementing dynamic language runtimes on top of language-level virtual machines. Type specialization is a critical optimization for dynamic language runtimes: generic code that handles any type of data is replaced with specialized code for particular types observed during execution. However, types can change, and the runtime must recover whenever unexpected types are encountered. The state-of-the-art recovery mechanism is called deoptimization. Deoptimization is a well-known technique for dynamic language runtimes implemented in low-level languages like C. However, no dynamic language runtime implemented on top of a virtual machine such as the Common Language Runtime (CLR) or the Java Virtual Machine (JVM) uses deoptimization, because the implementation thereof used in low-level languages is not possible. In this paper we propose a novel technique that enables deoptimization for dynamic language runtimes implemented on top of typed, stack-based virtual machines. Our technique does not require any changes to the underlying virtual machine. We implement our proposed technique in a JavaScript language implementation, MCJS, running on top of the Mono runtime (CLR). We evaluate our implementation against the current state-of-the-art recovery mechanism for virtual machine-based runtimes, as implemented both in MCJS and in IronJS. We show that deoptimization provides significant performance benefits, even for runtimes running on top of a virtual machine.
- Conference Article
7
- 10.1145/2576195.2576209
- Mar 1, 2014
We are interested in implementing dynamic language runtimes on top of language-level virtual machines. Type specialization is a critical optimization for dynamic language runtimes: generic code that handles any type of data is replaced with specialized code for particular types observed during execution. However, types can change, and the runtime must recover whenever unexpected types are encountered. The state-of-the-art recovery mechanism is called deoptimization. Deoptimization is a well-known technique for dynamic language runtimes implemented in low-level languages like C. However, no dynamic language runtime implemented on top of a virtual machine such as the Common Language Runtime (CLR) or the Java Virtual Machine (JVM) uses deoptimization, because the implementation thereof used in low-level languages is not possible. In this paper we propose a novel technique that enables deoptimization for dynamic language runtimes implemented on top of typed, stack-based virtual machines. Our technique does not require any changes to the underlying virtual machine. We implement our proposed technique in a JavaScript language implementation, MCJS, running on top of the Mono runtime (CLR). We evaluate our implementation against the current state-of-the-art recovery mechanism for virtual machine-based runtimes, as implemented both in MCJS and in IronJS. We show that deoptimization provides significant performance benefits, even for runtimes running on top of a virtual machine.
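The guard-and-recover cycle the abstract describes can be sketched in plain Python (a toy stand-in, not MCJS or any CLR machinery): specialized code checks the type assumption it was compiled under and, when the guard fails, deoptimizes back to the generic implementation.

```python
class Deopt(Exception):
    """Signals that execution must fall back to generic code."""

def add_generic(a, b):
    """Handles any operand types (the slow path)."""
    return a + b

def add_int_specialized(a, b):
    """Fast path, specialized under the assumption both operands are ints."""
    if type(a) is int and type(b) is int:   # guard on the observed types
        return a + b                        # specialized (fast) path
    raise Deopt()                           # assumption violated

def call_site(a, b):
    try:
        return add_int_specialized(a, b)
    except Deopt:
        # Recovery: abandon the specialization and re-execute generically.
        return add_generic(a, b)

assert call_site(2, 3) == 5            # stays on the fast path
assert call_site("a", "b") == "ab"     # guard fails, deoptimizes, still correct
```

The hard part the paper solves is doing this on a typed, stack-based VM, where the low-level trick of patching stack frames in place (as C-based runtimes do) is unavailable; the sketch above only shows the semantics of the recovery, not that mechanism.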
- Research Article
1
- 10.18280/isi.260610
- Dec 27, 2021
- Ingénierie des systèmes d information
In this paper, a highly efficient virtual machine (VM) migration approach using the GSO algorithm for cloud computing is proposed. The algorithm contains three phases: (i) VM selection, (ii) selection of the optimum number of VMs, and (iii) VM placement. In the VM selection phase, the VMs to be migrated are selected based on their resource utilization and fault probability. In phase 2, the optimum number of VMs to be migrated is determined based on the total power consumption. In the VM placement phase, Glowworm Swarm Optimization (GSO) is used to find the target VMs on which to place the migrated VMs. The fitness function is derived in terms of the distance between the main server and the other servers, VM capacity, and memory size. The VMs with the best fitness values are selected as targets for placing the migrated VMs. The proposed algorithms are implemented in CloudSim, and the performance results show that the PEVM-GSO algorithm attains reduced power consumption and response delay with improved CPU utilization.
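A minimal sketch of the kind of fitness function described above, scoring candidate targets on distance to the main server, capacity, and free memory. The weighted-sum form and the weights here are illustrative assumptions, not the paper's exact formula.

```python
def fitness(distance, capacity, free_mem,
            w_dist=0.4, w_cap=0.3, w_mem=0.3):
    """Score a candidate target: closer, roomier candidates score higher.

    The linear weighting is a hypothetical stand-in for the paper's
    derived fitness function.
    """
    return w_cap * capacity + w_mem * free_mem - w_dist * distance

candidates = {
    "host_a": fitness(distance=2.0, capacity=8, free_mem=16),
    "host_b": fitness(distance=9.0, capacity=8, free_mem=16),
}
# With equal capacity and memory, the candidate closer to the main server wins.
best = max(candidates, key=candidates.get)
assert best == "host_a"
```

In GSO proper, each glowworm's luciferin level would be updated from such a fitness value, and worms move toward brighter neighbors until the swarm converges on the best placement.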
- Dissertation
- 10.4225/03/58b911cef2af4
- Mar 3, 2017
Cloud Computing has recently emerged as a highly successful alternative information technology paradigm through on-demand resource provisioning and almost perfect reliability. In order to meet customer demands, Cloud providers are deploying large-scale virtualized data centers consisting of thousands of servers across the world. These data centers require huge amounts of electrical energy, which incurs very high operating costs and, as a result, leaves large carbon footprints. The reason behind the extremely high energy consumption is not just the amount of computing resources used, but also the inefficient use of these resources. Furthermore, with the recent proliferation of communication-intensive applications, network resource demands are becoming one of the key areas of performance bottleneck. As a consequence, efficient utilization of data center resources and minimization of energy consumption are emerging as critical factors for the success of Cloud Computing. This thesis addresses the above-mentioned resource- and energy-related issues through data center-level resource management, in particular, efficient Virtual Machine (VM) placement and consolidation strategies. The problem of high resource wastage and energy consumption is addressed with an online consolidated VM cluster placement scheme, utilizing the Ant Colony Optimization (ACO) metaheuristic and a vector algebra-based multi-dimensional resource utilization model. In addition, optimization of network resource utilization is addressed by an online network-aware VM cluster placement strategy in order to localize data traffic among communicating VMs and reduce traffic load in data center interconnects, which, in turn, reduces communication overhead in the upper-layer network switches. 
Besides the online placement schemes that optimize the VM placement during the initial VM deployment phase, an offline decentralized dynamic VM consolidation framework and an associated algorithm leveraging VM live migration technique are presented to further optimize the run-time resource usage and energy consumption, along with migration overhead minimization. Such migration-aware dynamic VM consolidation strategy uses realistic VM migration parameters to estimate impacts of necessary VM migrations on data center and hosted applications. Simulation-based performance evaluation using representative workloads demonstrates that the proposed VM placement and consolidation strategies are capable of outperforming the state-of-the-art techniques, in the context of large data centers, by reducing energy consumption up to 29%, server resource wastage up to 85%, and network load up to 60%.
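The vector-based multi-dimensional resource utilization model mentioned above can be illustrated with a small sketch: each PM's usage is a (CPU, memory) vector, and wastage is taken here as the imbalance between normalized residual capacities. This particular formula is an illustrative assumption, not the thesis's exact model.

```python
def wastage(used_cpu, used_mem, cap_cpu=1.0, cap_mem=1.0):
    """Illustrative wastage score for one PM (lower is better).

    Balanced residual capacities can still host future VMs; skewed
    residuals strand capacity in one dimension.
    """
    free_cpu = (cap_cpu - used_cpu) / cap_cpu   # normalized residual CPU
    free_mem = (cap_mem - used_mem) / cap_mem   # normalized residual memory
    return abs(free_cpu - free_mem)

# A balanced PM wastes less than a CPU-saturated, memory-empty one,
# so a placement heuristic minimizing this score packs complementary VMs.
assert wastage(0.6, 0.6) < wastage(0.9, 0.2)
```

An ACO placement scheme like the one in the thesis would use such a score (inverted) as part of the heuristic desirability when ants assign VMs to PMs.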
- Research Article
1
- 10.1016/j.compeleceng.2014.04.017
- May 14, 2014
- Computers and Electrical Engineering
Feedback control for multi-resource usage of virtualised database server