Articles published on Virtual machine
11170 Search results
- Research Article
- 10.1016/j.compbiolchem.2025.108875
- Apr 1, 2026
- Computational biology and chemistry
- Vipra Ajay Parekh + 6 more
Identification of phosphodiesterase 10A modulators for neurodegenerative and psychiatric disorders: Combination of physics-based virtual screening and machine learning approaches.
- Research Article
- 10.3390/electronics15051122
- Mar 9, 2026
- Electronics
- Xinlong Wu + 5 more
Data in multi-tenant cloud environments is increasingly shared across organizations, making strong in-memory isolation a critical requirement. Existing confidential computing mechanisms such as AMD SEV provide hardware-enforced protection, but they require specialized processors and incur non-trivial performance overhead, which limits their deployment in heterogeneous clouds. This paper presents DASPRI, a software-based approach that constructs an isolated execution environment for trusted virtual machines by combining dual address spaces with privilege restriction. DASPRI partitions physical memory into a normal region and an isolated region on NUMA systems, and steers all memory allocations of trusted VMs into the isolated region by monitoring page faults and kernel allocation paths. It further hardens the isolated region by mediating direct and dynamic kernel mappings and by maintaining separate page caches for trusted and normal VMs. Remote attestation is integrated to protect the integrity of metadata used to identify trusted VMs. We implement DASPRI on a HUAWEI Kunpeng AArch64 server running OpenEuler and evaluate it using microbenchmarks and UnixBench. Experimental results show that DASPRI enforces strong memory isolation with less than 5% overhead on basic system operations and only 1.3% degradation in overall host performance.
- Research Article
- 10.3390/electronics15051115
- Mar 8, 2026
- Electronics
- Mahmood Alharbi
The transition from conventional synchronous generators to inverter-based power systems has introduced significant challenges in stability, reliability, and protection coordination. Grid-forming inverters (GFMs) have emerged as a promising solution by emulating inertia and voltage regulation functions while enabling grid-supportive operation in weak or islanded networks. This study presents a structured qualitative review of the recent literature on GFM technologies. The selection process focused on control strategies, advanced semiconductor materials, protection frameworks, and cyber–physical security considerations. A thematic synthesis and comparative analysis were conducted to identify emerging trends and technical gaps. Among established approaches, virtual synchronous machine (VSM) and droop control remain widely adopted. More advanced strategies, including virtual oscillator control (VOC) and model predictive control (MPC), demonstrate improved dynamic performance in weak-grid conditions. Advances in semiconductor technologies, particularly Silicon Carbide (SiC) and Gallium Nitride (GaN), enable faster switching, higher efficiency, and enhanced thermal performance. The findings indicate a growing shift toward decentralized control architectures, fault-resilient converter topologies, and integrated protection–control co-design. Emerging solutions include grid-forming synchronization techniques that replace conventional phase-locked loop (PLL) structures, intrusion-tolerant inverter firmware with embedded anomaly detection, and predictive fault-clearing schemes tailored for low-inertia networks. Despite these advancements, several research gaps remain. These include limited large-scale validation of VOC and MPC strategies under high renewable penetration, insufficient interoperability metrics for legacy system integration, and a lack of standardized cybersecurity benchmarks across platforms. 
Future research should prioritize real-time experimental validation, robust protection co-design methodologies, and the development of regulatory and dynamic performance standards tailored to inverter-dominated grids. Strengthening protection coordination and interoperability frameworks will be essential to ensure the secure and stable deployment of GFMs in modern power systems.
- Research Article
- 10.1080/13614576.2026.2632000
- Mar 5, 2026
- New Review of Information Networking
- Himanshukamal Verma + 1 more
ABSTRACT The primary objective of this work is to present a novel hybrid optimization approach for container scheduling in cloud environments. Cloud architecture is based on a virtualized layer above Physical Machines (PMs), using Virtual Machines (VMs) and containers to promote scalability and flexibility. Containers, being lightweight relative to VMs, share the host OS kernel, lowering overhead and speeding up startup time. With all dependencies packaged together, containers simplify deployment across environments, improve resource usage, and ensure application isolation. These characteristics make containers ideal for cloud systems, where quick deployment, resource efficiency, and scalability are critical. Here, a new efficient hybrid optimization algorithm, called the Serial Exponential Sea-horse Optimizer (SE-SHO), is proposed for optimized container scheduling in cloud environments. The proposed SE-SHO combines the Exponential Weighted Moving Average (EWMA) with the Sea-horse Optimizer (SHO) to speed up convergence and to increase accuracy and robustness during complex scheduling tasks. The objective function is formulated on the basis of load, resource utilization, energy consumption, and transmission cost. The experimental analysis showed that the proposed approach achieved a migration cost of 0.183, a makespan of 0.423, and a resource utilization of 0.482.
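The abstract names EWMA as the smoothing ingredient fused into SHO. As a minimal sketch of that ingredient alone (the SHO update rules are not given here; the smoothing factor and fitness values below are illustrative assumptions, not the paper's), an exponentially weighted moving average damps iteration-to-iteration noise in a fitness signal:

```python
# Exponentially Weighted Moving Average: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
# A sketch of the smoothing component the paper combines with the Sea-horse
# Optimizer; alpha and the raw fitness values are illustrative only.

def ewma(values, alpha=0.3):
    """Return the EWMA series for a sequence of raw fitness values."""
    smoothed = []
    s = values[0]  # initialize with the first observation
    for x in values:
        s = alpha * x + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

# Noisy per-iteration fitness (e.g., a weighted mix of load, energy, and
# transmission cost); the smoothed series converges more steadily.
raw = [0.90, 0.60, 0.85, 0.55, 0.50, 0.70, 0.45]
print(ewma(raw))
```

A smaller alpha weights history more heavily (smoother but slower to react); a larger alpha tracks the newest observation more closely.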
- Research Article
- 10.38124/ijisrt/26feb1029
- Mar 3, 2026
- International Journal of Innovative Science and Research Technology
- Williams Brobbey + 3 more
The performance of grid-forming (GFM) inverters in improving frequency stability in low-inertia power networks with high penetration of renewable energy sources (RES) is thoroughly and quantitatively evaluated in this research. Time-domain simulations are used to evaluate frequency stability parameters at 20%, 40%, 60%, and 80% RES penetration levels using a modified IEEE 39-bus benchmark system. In comparison to traditional grid-following (GFL) droop control, the study shows that virtual synchronous machine (VSM) control in GFM inverters lowers the peak rate of change of frequency (RoCoF) by about 40% at 80% penetration. Implementing GFM also shortens recovery times and improves the frequency nadir by 0.5-1.0 Hz. The study provides a crucial benchmark for system designers by determining an ideal virtual inertia constant of M = 4 seconds through parametric sensitivity analysis. Additionally, the investigation demonstrates that hybrid GFM-battery energy storage system (BESS) designs offer improved resilience by combining prolonged energy support with rapid inertial response, which further improves recovery time and nadir. These quantitative results provide useful, data-driven recommendations for system planning and grid code development in networks dominated by inverters.
- Research Article
- 10.1007/s44443-026-00619-4
- Mar 3, 2026
- Journal of King Saud University Computer and Information Sciences
- Guanghao Yang + 3 more
A proactive virtual machine consolidation framework based on multi-dimensional workload awareness and deep reinforcement learning
- Research Article
- 10.1002/itl2.70247
- Mar 1, 2026
- Internet Technology Letters
- Jalawi Alshudukhi + 5 more
ABSTRACT The proliferation of Internet of Things (IoT) devices and cloud‐based applications has introduced critical challenges in resource management and load balancing within cloud‐assisted IoT environments. This paper presents an optimized strategy integrating supervised and unsupervised machine learning techniques, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and hybrid Lyrebird Falcon Optimization (HLFO), to enhance resource allocation and workload distribution across physical and virtual machines. The proposed system utilizes a multi‐objective optimization model based on key QoS metrics such as response time, availability, and throughput. Reinforcement learning further improves clustering decisions for real‐time adaptation. Simulation results demonstrate that the proposed method significantly outperforms existing models in reducing delay, minimizing packet loss, and improving throughput and packet delivery ratio, proving its effectiveness and scalability in cloud‐assisted IoT networks.
- Research Article
- 10.1049/icp.2025.4411
- Mar 1, 2026
- IET Conference Proceedings
- Oscar Escamilla Rincon + 3 more
Discrete implementation and comparative analysis of the virtual synchronous machine power loop
- Research Article
- 10.22214/ijraset.2026.77521
- Feb 28, 2026
- International Journal for Research in Applied Science and Engineering Technology
- V T Ram Pavan Kumar
Cloud data centers are essential for delivering scalable computing and storage services, yet challenges such as inefficient resource utilization, workload imbalance, and high energy consumption continue to impact performance and operational costs. This paper proposes a dynamic virtualization technique to optimize data server utilization in cloud environments by integrating real-time workload monitoring, adaptive virtual machine (VM) allocation, intelligent resource scheduling, and dynamic VM migration. The framework incorporates auto-scaling mechanisms to manage workload fluctuations, reducing idle resources while preventing server overload, and includes a predictive workload analysis component to forecast demand and allocate resources proactively. The proposed system is evaluated using performance metrics such as CPU utilization, memory efficiency, response time, throughput, and energy consumption. Experimental results demonstrate that the dynamic virtualization approach significantly improves server utilization, reduces power consumption, and enhances overall system performance compared to traditional static resource allocation methods, thereby supporting scalable, cost-effective, and sustainable cloud data center management.
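The adaptive allocation and migration logic described above can be illustrated with a toy threshold policy. The thresholds and host utilizations below are hypothetical, not values from the paper:

```python
# A toy sketch of threshold-based dynamic VM management: migrate load off
# overloaded hosts, consolidate underloaded ones so they can be powered down.
# The 30% / 80% thresholds and the sample utilizations are illustrative only.

OVERLOAD, UNDERLOAD = 0.80, 0.30

def classify_hosts(cpu_util):
    """Map each host to an action based on its CPU utilization (0.0-1.0)."""
    actions = {}
    for host, util in cpu_util.items():
        if util > OVERLOAD:
            actions[host] = "migrate-out"   # relieve the hotspot
        elif util < UNDERLOAD:
            actions[host] = "consolidate"   # candidate for shutdown
        else:
            actions[host] = "ok"
    return actions

print(classify_hosts({"h1": 0.92, "h2": 0.15, "h3": 0.55}))
# {'h1': 'migrate-out', 'h2': 'consolidate', 'h3': 'ok'}
```

Real consolidation frameworks replace these static thresholds with predicted demand, which is what the paper's predictive workload analysis component is for.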
- Research Article
- 10.55041/ijsrem56979
- Feb 27, 2026
- International Journal of Scientific Research in Engineering and Management
- Prof Pawan Panchole + 1 more
Abstract— Cloud computing has become the backbone of modern digital services, supporting applications that demand high availability, scalability, and performance. As cloud infrastructures grow in complexity, accurately estimating cloud performance metrics—such as response time, throughput, latency, resource utilization, and availability—has become a critical challenge. Traditional analytical and rule-based models often struggle to capture the dynamic, non-linear behavior of cloud environments. In this context, deep learning (DL) has emerged as a powerful data-driven approach for modeling and predicting cloud performance with higher accuracy and adaptability. A cloud environment is inherently dynamic due to factors such as fluctuating workloads, heterogeneous virtual machines, multi-tenancy, and varying network conditions. Performance metrics are influenced by complex interactions between compute, storage, and network resources. This paper presents a deep learning model for estimating cloud performance metrics. It can be observed that the proposed work attains improved performance compared to existing work in the domain. Keywords— Cloud Computing, Performance Estimation, Service-Level Agreements (SLAs), Regression Learning, Deep Learning, Forecasting Accuracy.
- Research Article
- 10.1080/17445760.2026.2616774
- Feb 26, 2026
- International Journal of Parallel, Emergent and Distributed Systems
- H M Kabamba + 2 more
Performance analysis in single-threaded, virtual-machine-based event-driven systems remains difficult due to abstraction layers separating applications, runtimes, and the operating system. Conventional profilers are effective for deterministic, multi-threaded systems but cannot capture the complex asynchronous interactions in environments such as Python, Node.js, Deno, or Lua. We introduce a runtime-level instrumentation technique that operates inside the virtual machine to capture event identifiers and contextual data, enabling post-hoc reconstruction across kernel, runtime, and user-space layers. Our Node Compass prototype demonstrates this approach, producing fine-grained, multilayer traces with minimal overhead and enabling precise bottleneck detection, thereby advancing observability in asynchronous runtimes.
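A loose Python analog of such runtime-level instrumentation (not the authors' Node Compass tooling, which works inside the JavaScript VM) is to intercept task creation in `asyncio` with a custom task factory and record per-task lifecycle events for post-hoc reconstruction:

```python
# Sketch: capture event identifiers inside an asynchronous runtime by
# installing a custom asyncio task factory that logs task creation and
# completion. The trace format here is illustrative, not Node Compass's.

import asyncio
import time

trace = []  # (timestamp, event, task name)

def tracing_factory(loop, coro, **kwargs):
    # kwargs carries `context` on Python 3.11+; older loops omit it.
    task = asyncio.Task(coro, loop=loop, **kwargs)
    trace.append((time.monotonic(), "created", task.get_name()))
    task.add_done_callback(
        lambda t: trace.append((time.monotonic(), "done", t.get_name())))
    return task

async def worker(n):
    await asyncio.sleep(0.01 * n)  # stand-in for an async I/O operation

async def main():
    await asyncio.gather(*(asyncio.create_task(worker(n)) for n in range(3)))

loop = asyncio.new_event_loop()
loop.set_task_factory(tracing_factory)
loop.run_until_complete(main())
loop.close()

for entry in trace:
    print(entry)
```

Correlating such created/done pairs across layers is the essence of the post-hoc reconstruction the paper describes, though the real system also merges kernel-level events.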
- Research Article
- 10.1038/s41597-026-06765-8
- Feb 26, 2026
- Scientific data
- Dávid D Kovács + 5 more
The Copernicus Data Space Ecosystem (CDSE) is the official data platform for the Copernicus Programme's satellites. CDSE combines instant access to satellite imagery with Application Programming Interfaces and virtual machine processing. Instead of downloading satellite imagery for local computation, CDSE utilizes cloud-optimized files to provide data according to the filtering and processing request of the user, facilitating large-scale scientific analysis. Cloud computing on CDSE eliminates the need for users to rely on their own data infrastructure. The incorporated standards support both Open Science and commercialization of scientific tools and algorithms. CDSE serves all users from beginners to professionals, from the interactive visualization of imagery to custom ML algorithms. Acquiring the skills required to process Earth Observation data is facilitated by the open-source codebase and tutorials. Access to public cloud processing is expected to foster the uptake of Earth Observation across new domains. CDSE now provides the critical mass to serve as a tool for knowledge exchange and to influence commercial and public providers alike to support cloud processing.
- Research Article
- 10.55041/ijsrem56868
- Feb 24, 2026
- International Journal of Scientific Research in Engineering and Management
- Ved Prakash + 1 more
Abstract—Virtual Synchronous Machines (VSMs) emulate the inertial and damping behavior of conventional synchronous generators in inverter-based power systems. Conventional VSM implementations use fixed values of inertia constant (J) and damping coefficient (D), which lead to sub-optimal performance under dynamic grid conditions. This paper proposes an intelligent adaptive tuning framework for J and D using Deep Reinforcement Learning (DRL), employing Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC) algorithms, validated via a MATLAB/Simulink–Python co-simulation environment. Simulation results across five disturbance scenarios demonstrate that the SAC-based controller achieves up to 48.6% reduction in maximum frequency deviation, 41.2% improvement in settling time, and 46.8% reduction in ROCOF compared to fixed-parameter VSM. The complete MATLAB and Python implementation details and simulation graphs are presented, confirming the viability for next-generation grid-forming inverter control. Index Terms—Virtual Synchronous Machine, Deep Reinforcement Learning, PPO, SAC, Adaptive Control, Inertia Emulation, Frequency Regulation, MATLAB/Simulink, Python, Grid-Forming Inverter.
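The role of J and D can be seen in a minimal numerical sketch, assuming the standard per-unit swing-equation form J·d(Δω)/dt = ΔP − D·Δω (a generic model, not the paper's exact plant; all parameter values are illustrative):

```python
# Forward-Euler integration of a simplified per-unit VSM swing equation:
#   J * d(dw)/dt = dP - D * dw
# after a step power imbalance dP. Larger J lowers the initial RoCoF;
# larger D shrinks the quasi-steady frequency deviation (dP / D).
# J, D, dP, and the horizon are illustrative, not the paper's values.

def freq_response(J, D, dP=-0.1, dt=0.001, T=5.0):
    """Return (peak |frequency deviation|, initial RoCoF) for step dP."""
    dw, peak = 0.0, 0.0
    rocof0 = dP / J                     # RoCoF at the instant of the step
    for _ in range(int(T / dt)):
        dw += dt * (dP - D * dw) / J    # swing-equation update
        peak = max(peak, abs(dw))
    return peak, rocof0

for J in (2.0, 4.0, 8.0):
    peak, rocof = freq_response(J, D=5.0)
    print(f"J={J}: peak deviation {peak:.4f} pu, initial RoCoF {rocof:.4f} pu/s")
```

This first-order sketch already shows the trade-off a DRL tuner exploits: J shapes RoCoF and response speed while D sets the residual deviation, so no single fixed pair is best across disturbance types.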
- Research Article
- 10.55041/ijsrem56820
- Feb 23, 2026
- International Journal of Scientific Research in Engineering and Management
- Dr Saurabh V Kumar + 1 more
Abstract—The rapid proliferation of inverter-based renewable generation has introduced significant challenges to power system frequency stability due to declining physical inertia. Virtual Synchronous Machine (VSM) technology has emerged as a promising solution by emulating synchronous generator dynamics in grid-connected inverters. However, a systematic review of existing literature reveals that all major VSM implementations employ fixed values of virtual inertia (J) and damping coefficient (D), which are inadequate for handling the wide range of dynamic disturbances encountered in modern power systems. Rule-based adaptive methods proposed in recent works are limited by their reliance on heuristic logic and inability to generalize across diverse grid conditions. This paper presents a comprehensive literature review of seven key publications spanning VSM modeling, deep reinforcement learning (DRL) algorithms, and adaptive control frameworks. Through structured comparative analysis across four evaluation tables, we identify six critical research gaps in existing work and demonstrate that the proposed DRL-based adaptive VSM tuning framework—employing Proximal Policy Optimization (PPO) and Soft Actor-Critic (SAC) algorithms in a MATLAB/Simulink–Python co-simulation—addresses all identified gaps. The SAC-based controller achieves up to 48.6% reduction in maximum frequency deviation, 41.2% improvement in settling time, and 46.8% reduction in Rate of Change of Frequency (ROCOF) compared to the best existing fixed-parameter VSM, validating the proposed approach as a significant advancement over the current state of the art. Index Terms—Virtual Synchronous Machine (VSM), Deep Reinforcement Learning, Proximal Policy Optimization (PPO), Soft Actor-Critic (SAC), Adaptive Inertia, Damping Control, Literature Review, MATLAB/Simulink, Python, Grid-Forming Inverter, Frequency Regulation, ROCOF.
- Research Article
- 10.3390/jcp6010039
- Feb 22, 2026
- Journal of Cybersecurity and Privacy
- Kenan Sansal Nuray + 2 more
This paper presents an experimental comparison of the EMBA firmware security analysis framework deployed in cloud-based and standalone environments. Unlike prior studies that primarily focus on EMBA’s analytical capabilities, this work examines how deployment choices influence performance and execution time during IoT firmware analysis. Using identical EMBA configurations and analysis modules, firmware images of varying sizes were analyzed on a standalone personal computer and a Microsoft Azure cloud-based virtual machine. Execution time, detected vulnerabilities, and resource utilization were systematically recorded to evaluate the impact of the deployment environment. The results indicate that scan duration is affected by both firmware size and execution context. For example, using EMBA v1.5.0, a 25.5 MB firmware image required approximately 14 h on a standalone system and over 25 h in the cloud. In contrast, a 30.2 MB image was completed in approximately 18 h locally and 17 h in the cloud. Despite these differences in execution time, the type and number of identified vulnerabilities were largely consistent across both environments, suggesting comparable analytical coverage. Overall, this deployment-focused evaluation provides empirical insight into performance-related trade-offs relevant to practitioners selecting local or cloud-based environments for firmware security analysis.
- Research Article
- 10.34190/iccws.21.1.4456
- Feb 19, 2026
- International Conference on Cyber Warfare and Security
- Abigail Cliche + 4 more
Cross-domain systems have traditionally employed virtualization to isolate security domains, providing communication through standard TCP/IP networking stacks coupled with access permissions and credentials to enforce isolation. This deep and complex chain of trust typically depends upon a hardware base, such as a Trusted Platform Module (TPM) chip combined with a secure bootstrapping process. This paper describes a novel and high-performance alternative, the Secure Transfer Link (STL), leveraging the unique architectural characteristics of the AMD UltraScale Multi-Processor System-on-Chip (MPSoC) device family: CPU affinity, an on-chip field programmable gate array (FPGA), and bus-mastering. These architectural characteristics make it possible to construct a secure data transfer path within the FPGA that can control which virtual machines may access and transfer data, enforcing isolation. The abstraction can be extended to include deep packet inspection and validation, such as parsing that checks adherence to the JavaScript Object Notation (JSON) format. Validation is achieved through the combination of formal grammars with a pushdown automaton (PDA) parser and automatic transformation into an FPGA hardware configuration, resulting in a formally verifiable and hardened intellectual property (IP) called the Data Validator. The Secure Transfer Link is constructed by combining this Data Validator with another IP, the Memory Guard, which enforces access controls. Together, these hardware IPs comprise a system that prevents malicious software resident on a processor from undermining access policies or transferring malicious data. The presented IPs are performant: their throughput improvement over traditional UDP/IP networking stacks is dramatic, with speedups of up to 7x for tactical-length messages and up to 4x for larger messages.
The IPs are created using High-Level Synthesis (HLS), making it possible to formally specify a broad range of alternative policy and enforcement options and then automatically include them in the Secure Transfer Link, constituting a novel isolation enforcement solution that achieves higher throughput than state-of-the-art alternatives and enables selective domain access contingent upon formal verification of data.
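The kind of PDA-based structural validation the Data Validator implements in hardware can be illustrated in software. The sketch below checks only JSON bracket nesting (while skipping brackets that appear inside string literals), a small fragment of what a full grammar-driven validator would enforce:

```python
# Toy pushdown-automaton check of JSON structural nesting. The real Data
# Validator enforces a full formal grammar in FPGA hardware; this sketch
# only validates that {} and [] nest correctly outside string literals.

def pda_nesting_ok(text: str) -> bool:
    pairs = {"}": "{", "]": "["}
    stack = []                    # the PDA's pushdown store
    in_string = False
    escaped = False
    for ch in text:
        if in_string:
            if escaped:
                escaped = False   # character after backslash: consumed
            elif ch == "\\":
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in "{[":
            stack.append(ch)      # push opener
        elif ch in "}]":
            if not stack or stack.pop() != pairs[ch]:
                return False      # mismatched or unbalanced closer
    return not stack and not in_string

print(pda_nesting_ok('{"a": [1, 2, {"b": "}"}]}'))  # True
print(pda_nesting_ok('{"a": [1, 2}'))               # False
```

Because the stack depth is bounded in practice, this logic maps naturally onto a fixed-depth hardware stack, which is what makes an HLS-generated FPGA implementation tractable.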
- Research Article
- 10.1785/0220250337
- Feb 13, 2026
- Seismological Research Letters
- Shicheng Wang + 7 more
Abstract The current earthquake early warning system (EEWS) deployed by the China Earthquake Network Center processes data from more than 18,600 stations, handling approximately 110,000 packets per second in real time. However, the system is nearing its processing capacity limit and encountering bottlenecks. To address these limitations, we have developed and redesigned an earthquake early warning prototype system (EEWPS) by integrating existing earthquake early warning algorithms with the Flink distributed real-time big data processing engine. The system was tested using a dataset comprising 182 seismic events of magnitude M 3.0 or higher recorded in mainland China from 2008 to 2023. The EEWPS successfully processed 171 of these events, with 11 missed events and no false alarms. Notably, the time from the first station trigger to the first-warning report was 5.6 ± 3.7 s, whereas the time from the origin to the first report was 9.8 ± 5.5 s. The system demonstrated an epicenter deviation of 7.4 ± 9.6 km, a magnitude deviation of −0.12 ± 0.82, and a focal depth deviation of 1.3 ± 8.0 km. The results of the four representative test events further illustrate the system’s performance in practical scenarios. In the initial early warning reports, epicenter location errors were generally constrained within 10 km, whereas magnitude errors were maintained within 0.8 units. For stress testing, the EEWPS was deployed across four virtual machines, with a combination of event waveforms and simulated data. The system successfully processed data from approximately 31,000 stations, achieving an average throughput of over 186,700 packets per second. This performance indicates that the Flink-based EEWPS not only addresses the current processing bottlenecks of the existing EEWS but also offers a potential approach for the next generation of intelligent EEWS, capable of handling large-scale data and providing timely, accurate alerts in operational environments.
- Research Article
- 10.1145/3777419
- Feb 13, 2026
- ACM Transactions on Parallel Computing
- Liubov Evseeva + 4 more
In this study, approaches to the development of interactive Java algorithms intended for dynamic visualization of parallel computational threads were considered. The proposed interactive Java algorithms make it possible to create visual graphical representations of parallel processes, their interactions, and data distribution. Within the framework of the research, the key approaches to visualizing parallel computational threads are analyzed, the features of the interactive components used are assessed, and methods of integration with existing monitoring and debugging systems are considered. With configurable visualization tools, developers and researchers can observe the evolution of computational threads, evaluate system performance, and react promptly to changes in the structure of parallel tasks. Implementing the algorithms on the Java platform ensures portability, broad applicability, and integration with existing frameworks for high-performance computing. The use of dynamic data structures, thread-safe collections, and the parallelism mechanisms provided by the language and its standard libraries allows large amounts of data to be processed efficiently in real time. In addition, the Java Virtual Machine provides profiling tools that can be applied directly to optimize the visualized processes.
- Research Article
- 10.59256/ijire.20260701012
- Feb 12, 2026
- International Journal of Innovative Research in Engineering
- Zaw Zaw Htwe + 1 more
Cloud computing provides on-demand computing power over the internet, removing the need for physical infrastructure management. Cloud abstraction allows users to manipulate virtual machines as objects, enabling applications to be integrated, launched, and run in a smooth, uniform, and continuous manner. Cloud abstraction conceals technical details and offers smart options, which improve compatibility, adaptability, and scalability. As a result, cloud platforms have become an attractive option for both research and industrial sectors. With the aid of automated resource optimization layers and effective setups, complex systems can now be built and integrated with minimal effort. However, this ease of use comes with its own hidden drawbacks. By packaging away system details and relying on non-transparent automated services, we can lose sight of critical connections and sacrifice the oversight needed to ensure efficiency and reliability. Abstraction layers can make infrastructure fragile, leading to errors and failures that are difficult to trace and to a dangerous dependence on single providers. This research explores how ease of use compromises system resilience and proposes principles for building cloud systems that remain robust, sustainable, and user-friendly.
- Research Article
- 10.1038/s41598-026-34998-5
- Feb 9, 2026
- Scientific reports
- Fan Sun + 2 more
The virtual synchronous compensator (VSCOM) integrates virtual synchronous machine control within a static var generator (SVG), providing active voltage support and improving the adaptability of SVGs to weak grid conditions. However, the interaction between VSCOM, which adopts grid-forming control, and renewable energy grid-connected converters (REGC) based on grid-following control introduces complex transient stability characteristics. This study investigates the effect of VSCOM on the transient synchronous stability of REGC under large grid disturbances. First, a constant voltage current-limiting control strategy for VSCOM is proposed based on its operational characteristics. A mathematical model is then established to assess the enhancement of the static stability limit of REGC by VSCOM. Subsequently, a transient model of the coupled VSCOM-REGC system is developed, considering the short-circuit ratio (SCR), control parameters, and reactive power capacity, to clarify the mechanism by which VSCOM affects the transient synchronous stability of REGC. Finally, an electromagnetic transient simulation model is built using MATLAB/Simulink to verify the theoretical analysis.