Experimental performance analysis of cloud resource allocation framework using spider monkey optimization algorithm

Abstract

The demand for cloud services has increased exponentially over the last decade owing to the plethora of services they offer, and the cloud has become a significant platform for computing large and diverse applications over the internet. On the other hand, on-demand resource allocation to a variety of applications is a serious issue due to dynamic workload conditions and uncertainty in the cloud environment. Several existing state-of-the-art techniques often fail to allocate optimal resources to forthcoming demands, leading to an imbalanced workload over the cloud platform and degrading performance. This article introduces a secure and self-adaptive resource allocation framework that addresses these issues and allocates the most suitable resources to users' applications while ensuring deadline constraints. Further, the proposed framework is integrated with a metaheuristic algorithm, the enhanced spider monkey optimization algorithm, which is based on the intelligent foraging behavior of spider monkeys. The proposed algorithm finds an optimal resource for a user's application using the fission-fusion approach and improves multiple influential parameters such as time, cost, degree of load balancing, energy consumption, task rejection ratio, and so on. The experimental CloudSim-based results verify that the proposed framework performs better than state-of-the-art approaches such as PSO, GSA, ABC, and IMMLB.
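
To make the optimization target concrete, the Python sketch below shows how a candidate task-to-VM mapping could be scored on the parameters the framework optimizes (time, cost, degree of load balancing, energy) and how a fission-fusion style local-leader step could refine a population of mappings. All field names, weights, and the perturbation rate are illustrative assumptions; this is not the authors' ESMO implementation.

```python
# Hypothetical sketch, not the paper's code: weighted multi-objective fitness
# for a task-to-VM mapping plus one fission-fusion style local-leader step.
import random

def fitness(mapping, tasks, vms, w=(0.4, 0.3, 0.2, 0.1)):
    """mapping[i] = index of the VM assigned to task i (lower fitness is better)."""
    finish = [0.0] * len(vms)
    cost = energy = 0.0
    for i, v in enumerate(mapping):
        t = tasks[i]["length"] / vms[v]["mips"]   # execution time of task i on VM v
        finish[v] += t
        cost += t * vms[v]["price"]               # pay-per-use cost
        energy += t * vms[v]["power"]             # active-power energy
    makespan = max(finish)
    imbalance = makespan - min(finish)            # degree of load imbalance
    return w[0] * makespan + w[1] * cost + w[2] * imbalance + w[3] * energy

def local_leader_phase(population, tasks, vms, perturb=0.2):
    """Members copy most of the local leader's mapping and perturb the rest,
    keeping the change only when it improves fitness (greedy update)."""
    leader = min(population, key=lambda m: fitness(m, tasks, vms))
    new_population = []
    for member in population:
        child = [leader[i] if random.random() > perturb else random.randrange(len(vms))
                 for i in range(len(member))]
        better = fitness(child, tasks, vms) < fitness(member, tasks, vms)
        new_population.append(child if better else member)
    return new_population
```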

Similar Papers
  • Research Article
  • 10.31449/inf.v49i36.11126
NSGA-II Based Multi-Objective Disaster Recovery Scheduling for Virtual Cloud Platforms
  • Dec 20, 2025
  • Informatica
  • Liwei Wang + 4 more

This study proposes a multi-objective optimization (MOO) method based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II) to improve the virtual cloud platforms' disaster recovery scheduling efficiency. First, an MOO model is constructed. The model defines the resource parameters of physical nodes and virtual machines. Meanwhile, it designs a three-objective function to "minimize disaster recovery response time, maximize resource utilization, and minimize costs". Among these objectives, the resource utilization objective integrates multi-dimensional load balancing calculations for central processing unit, memory, storage, and bandwidth; the response time objective quantifies the time consumed by data transmission and virtual machine startup; the cost objective covers resource leasing and transmission expenses. At the same time, constraints related to resource capacity, virtual machine uniqueness, compatibility, and data consistency are incorporated into the model. For algorithm implementation, binary encoding directly represents the virtual machine-to-physical node allocation relationships x_ij. The design incorporates simulated binary crossover with a probability of 0.9 and polynomial mutation operators with a probability of 0.1, both adapted for virtual cloud environments. A selection mechanism of "non-dominated sorting + elite retention" is adopted. The solution process is optimized by combining the dynamic characteristics of disaster recovery scenarios (real-time update of resource status and dynamic adjustment of disaster levels). Threshold verification is used for resource capacity constraints; a hierarchical feedback method is applied to adjust the allocation strategy for data consistency constraints (which rely on the virtual machine delay difference |T_a − T_b| ≤ δ), ensuring the proportion of feasible solutions. The experiment simulates a large-scale cloud environment based on Google Cluster Data, setting three scenarios: small-scale node failure, large-scale regional disaster, and mixed failure. The proposed method is compared with the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) and NSGA-III. The results show that NSGA-II achieves the optimal load balance degree. In the small-scale failure scenario, the load balance degree is 21.1% and 19.0% lower than that of MOEA/D and NSGA-III, respectively. In the large-scale disaster scenario, it is 35.7% and 25.0% lower. In the large-scale scenario, the response time of NSGA-II is 15.2%-28.3% shorter than that of the benchmark algorithms; its cost is 22.8% lower than that of MOEA/D (with significant optimization in resource leasing cost). Compared with previous studies, the innovations of this study are as follows. At the modeling level, it breaks through the single-dimensional load optimization of traditional post-disaster scheduling and adapts to the virtualization characteristics of cloud platforms. At the algorithm level, it solves the problem of insufficient dynamic adaptation of traditional NSGA-II in virtual cloud disaster recovery through scenario-based encoding and constraint processing. At the practical level, it fills the method gap between disaster recovery scheduling in virtual cloud scenarios and that in traditional physical scenarios. This study enriches the application of MOEA in cloud resource management and provides theoretical and technical support for improving the disaster recovery capability of cloud platforms.
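
As a rough illustration of the model described above, the Python sketch below scores one VM-to-node assignment (x[i] = j meaning virtual machine i is placed on physical node j) against the three objectives, applies threshold verification for the capacity constraints, and checks the pairwise delay-difference consistency condition |T_a − T_b| ≤ δ. The field names and the uniform weighting of the four load dimensions are assumptions for illustration, not the paper's implementation.

```python
# Illustrative evaluation of one candidate disaster-recovery assignment.
def evaluate(x, vms, nodes, delta):
    dims = ("cpu", "mem", "sto", "bw")
    used = {j: {r: 0.0 for r in dims} for j in range(len(nodes))}
    start_times, response, cost = [], 0.0, 0.0
    for i, j in enumerate(x):
        vm, node = vms[i], nodes[j]
        for r in dims:
            used[j][r] += vm[r]
        transfer = vm["data"] / node["bw_rate"]               # data transmission time
        start_times.append(transfer)
        response = max(response, transfer + vm["boot_time"])  # recovery response time
        cost += vm["lease_cost"] + vm["data"] * node["tx_cost"]
    # threshold verification of resource-capacity constraints
    feasible = all(used[j][r] <= nodes[j][r] for j in used for r in dims)
    # data-consistency constraint: pairwise start-time difference within delta
    consistent = all(abs(a - b) <= delta for a in start_times for b in start_times)
    # multi-dimensional load-balance degree (smaller spread = better balance)
    loads = [sum(used[j][r] / nodes[j][r] for r in dims) / len(dims) for j in used]
    imbalance = max(loads) - min(loads)
    return (response, imbalance, cost), (feasible and consistent)
```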

  • Conference Article
  • Cited by 5
  • 10.1109/iaecst57965.2022.10062173
An Efficient Computing Task Offloading Strategy Based on Energy Consumption and Load Balancing Degree
  • Dec 9, 2022
  • Nan Hu + 6 more

Offloading computing tasks in edge computing has a significant impact on edge devices' energy consumption and load balancing. To reduce the total cost, an offloading strategy that combines energy consumption and load balancing degree is proposed. A task partitioning model is given to perform fine-grained division of computing tasks. Furthermore, the energy consumption model of computing task offloading is obtained through the time delay model, and the cost function of computing task offloading is then constructed by combining the load balancing degree and energy consumption. With the task offloading strategy, the minimum cost of task offloading is obtained under multiple constraints, and the path of computing task offloading is determined. The simulation results demonstrate that the strategy can significantly improve the load balancing and the overall performance of the edge servers.
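
A minimal sketch of how such a combined cost could look is given below, assuming a simple delay-based energy model and a variance-style load-balance degree; the parameter names and weights are illustrative, not the paper's formulation.

```python
# Illustrative offloading cost: weighted sum of energy and load-balance degree.
def offload_cost(task_cycles, data_bits, server, servers,
                 alpha=0.5, beta=0.5, p_tx=0.5, rate=1e7):
    t_tx = data_bits / rate                           # transmission delay (s)
    t_exec = task_cycles / server["freq"]             # execution delay on the server
    energy = p_tx * t_tx + server["power"] * t_exec   # energy derived from the delay model
    loads = [s["load"] for s in servers]
    loads[servers.index(server)] += t_exec            # load if this server takes the task
    mean = sum(loads) / len(loads)
    balance = sum((l - mean) ** 2 for l in loads) / len(loads)  # load-balance degree
    return alpha * energy + beta * balance            # choose the server minimizing this
```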

  • Research Article
  • Cited by 46
  • 10.1016/j.suscom.2018.06.002
PSO-COGENT: Cost and energy efficient scheduling in cloud environment with deadline constraint
  • Jun 20, 2018
  • Sustainable Computing: Informatics and Systems
  • Mohit Kumar + 1 more

  • Research Article
  • 10.1504/ijnvo.2019.10025022
A QoS-aware resource allocation framework in virtualised cloud environments
  • Jan 1, 2019
  • International Journal of Networking and Virtual Organisations
  • Yuan Tian

In cloud platforms, the resource allocation service plays an important role in running user applications efficiently. However, the current allocation mechanisms in many cloud platforms only provide best-effort service for users' jobs, which means that users' quality-of-service (QoS) requirements cannot be well guaranteed. In this paper, we present a novel QoS-aware resource allocation framework, which applies a feedback-control technique to achieve fair resource allocation between multiple virtual machine (VM) instances. Theoretical analysis of the proposed control model has proven that it meets the feasibility and stability requirements. Experimental results conducted on a real-world cloud platform show that the proposed resource allocation framework can significantly improve effective resource utilisation as well as overall task execution efficiency. In addition, the proposed framework also shows better robustness when the cloud platform is in the presence of highly dynamic workloads.
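
The sketch below illustrates the general shape of such a feedback loop, assuming a simple proportional-integral controller that adjusts a VM's resource share from its measured response time; the gains, variable names, and per-VM structure are assumptions, not the paper's control model.

```python
# Illustrative per-VM feedback controller for resource shares.
class ShareController:
    def __init__(self, target_response, kp=0.4, ki=0.1, share=1.0):
        self.target = target_response
        self.kp, self.ki = kp, ki
        self.share, self.integral = share, 0.0

    def update(self, measured_response):
        error = measured_response - self.target   # positive error: the VM is too slow
        self.integral += error
        self.share += self.kp * error + self.ki * self.integral
        self.share = max(0.1, self.share)          # keep a minimum allocation
        return self.share

def normalize(shares, capacity):
    """Scale all VM shares so they never exceed the host's capacity."""
    total = sum(shares)
    return [s * capacity / total for s in shares] if total > capacity else shares
```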

  • Research Article
  • Cited by 11
  • 10.70470/khwarizmia/2025/005
Energy-Efficient Task Offloading and Resource Allocation in Mobile Cloud Computing Using Edge-AI and Network Virtualization
  • Jul 4, 2025
  • KHWARIZMIA
  • Raed A Hasan + 8 more

In the emerging landscape of Mobile Cloud Computing, energy efficiency and resource optimization are vital challenges as mobile devices increasingly rely on cloud and edge resources to execute their tasks. This paper proposes a new energy-efficient task offloading and resource allocation framework with Edge-AI enabled network virtualization for the dynamic management of computational tasks in mobile cloud environments. The framework allows real-time task offloading decisions by comparing the energy consumption of local execution against edge processing and the performance gains expected from offloading. Based on the energy savings and the availability of edge resources, it then ranks tasks for offloading. Network virtualization optimizes edge resource use by allocating resources according to task demand, reducing latency and increasing processing efficiency. The simulation results show that our approach significantly reduces energy consumption on mobile devices while achieving low latency and high task success rates, outperforming cloud-only offloading, static edge computing methods, and traditional dynamic programming.
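
The core offloading decision can be pictured as an energy comparison, as in the hedged sketch below: local execution energy versus transmission energy to the edge, with tasks ranked by their expected saving. The energy models and parameter names are common textbook assumptions, not the paper's exact formulation.

```python
# Illustrative local-vs-edge energy comparison for offloading decisions.
def local_energy(cycles, f_local, k=1e-27):
    return k * cycles * f_local ** 2              # dynamic CPU energy model

def offload_energy(data_bits, p_tx, uplink_rate):
    return p_tx * data_bits / uplink_rate         # radio transmission energy

def rank_for_offloading(tasks, f_local, p_tx, uplink_rate):
    """Return task ids worth offloading, largest energy saving first."""
    scored = []
    for t in tasks:
        saving = (local_energy(t["cycles"], f_local)
                  - offload_energy(t["bits"], p_tx, uplink_rate))
        if saving > 0:                             # offload only when it saves energy
            scored.append((saving, t["id"]))
    return [tid for _, tid in sorted(scored, reverse=True)]
```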

  • Research Article
  • Cited by 15
  • 10.11591/ijeecs.v18.i2.pp1081-1088
Optimization model for QoS based task scheduling in cloud computing environment
  • May 1, 2020
  • Indonesian Journal of Electrical Engineering and Computer Science
  • Sirisha Potluri + 1 more

The shortest-job-first task scheduling algorithm allocates tasks based on task length, i.e., the task with the smallest execution time is scheduled first and longer tasks are executed later, based on system availability. The Min-Min algorithm schedules short tasks in parallel, and long tasks follow them; short tasks are executed until the system is free to schedule and execute the longer tasks. A task particle optimization model can be used to allocate tasks in a cloud computing network by applying Quality of Service (QoS) to satisfy users' needs. The tasks are categorized into different groups; each group contains tasks with similar attributes (types of users and tasks, task size, and task latency). Once a task is allocated to a particular group, the scheduler starts assigning these tasks to the accessible services. The proposed optimization model includes resource and load-balancing optimization, a non-linear objective function, a resource allocation model, a queuing cost model, a cloud cost estimation model, and a task particle optimization model for task scheduling in the cloud computing environment. The main objectives identified are as follows: to propose an efficient task scheduling algorithm that maps tasks to resources using a dynamic load-based distributed queue for dependent tasks, so as to reduce cost, execution time, and tardiness, and to improve resource utilization and fault tolerance; to develop a multi-objective optimization-based VM consolidation technique that considers task precedence, load balancing, and fault tolerance, aiming at efficient resource allocation and data center operation; to achieve a better migration performance model that efficiently captures the requirements of memory, networking, and task scheduling; and to propose a QoS-based resource allocation model using a fitness function to optimize execution cost, execution time, energy consumption, and task rejection ratio, and to increase throughput. QoS parameters such as reliability, availability, degree of imbalance, performance, SLA violations, and response time for cloud services can be used to deliver better cloud services.
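
Since the Min-Min heuristic is central to this discussion, a compact reference implementation is sketched below; it is a generic version of the standard algorithm (task lengths in instructions, VM speeds in MIPS), not the paper's code.

```python
# Min-Min scheduling: repeatedly assign the task whose best-case completion
# time across all VMs is smallest, so short tasks are dispatched first.
def min_min(task_lengths, vm_mips):
    ready = [0.0] * len(vm_mips)                  # current finish time of each VM
    remaining = dict(enumerate(task_lengths))     # task id -> length
    schedule = {}
    while remaining:
        best = None                               # (completion time, task, vm)
        for tid, length in remaining.items():
            for vid, mips in enumerate(vm_mips):
                ct = ready[vid] + length / mips
                if best is None or ct < best[0]:
                    best = (ct, tid, vid)
        ct, tid, vid = best
        ready[vid] = ct
        schedule[tid] = vid
        del remaining[tid]
    return schedule, max(ready)                   # task-to-VM map and makespan
```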

  • Research Article
  • Cited by 7
  • 10.60087/jaigs.v1i1.243
Dynamic Resource Allocation and Energy Optimization in Cloud Data Centers Using Deep Reinforcement Learning
  • Jan 22, 2024
  • Journal of Artificial Intelligence General science (JAIGS) ISSN:3006-4023
  • Haoran Li + 3 more

This paper presents a new deep reinforcement learning (DRL) framework for resource allocation and optimization in cloud computing. The proposed method leverages a multi-agent DRL architecture to address extensive decision-making processes in large cloud environments. We formulate the problem as a Markov decision process, creating a state space that includes resource usage, workload characteristics, and energy. The action space comprises VM placement, migration, and physical power-state decisions. A carefully designed reward function balances energy, efficiency, and resource-utilization goals. We modify the Proximal Policy Optimization algorithm to handle the heterogeneous action space and include advanced training techniques. Simulations using real-world traces show that our method outperforms conventional and single-agent DRL methods, achieving a 25% reduction in electricity usage while keeping SLA violations at 2.5%. The framework adapts to different workload patterns and scales well to large data-center environments. A further study confirms the proposal's validity, showing significant improvements in energy consumption and efficiency compared with existing commercial management systems.
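
The reward-shaping idea can be illustrated with a one-line function of the kind such agents optimize; the weights and state fields below are assumptions for illustration, not the paper's values.

```python
# Illustrative reward: favor utilization, penalize power draw and SLA violations.
def reward(power_kw, cpu_utilization, sla_violations, w_e=1.0, w_u=0.5, w_s=5.0):
    return -w_e * power_kw + w_u * cpu_utilization - w_s * sla_violations

# e.g. reward(12.0, 0.75, 0) -> -11.625; each SLA breach lowers it by a further 5.0
```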

  • Research Article
  • 10.1002/sat.1567
Task‐Oriented Multiobjective Computation Offloading in LEO Mega‐Constellation Edge Computing Network
  • May 19, 2025
  • International Journal of Satellite Communications and Networking
  • Qingxiao Xiu + 4 more

The low earth orbit (LEO) mega-constellation network, with its extensive coverage and low-latency characteristics, offers new opportunities to meet the demands of computation-intensive and latency-sensitive applications in remote areas. However, with the increasing complexity of task offloading demands and the limited availability of satellite resources, resource management and scheduling face significant challenges. To tackle these challenges, we propose a satellite-terrestrial integrated LEO mega-constellation edge computing network (LMCECN) management architecture, which enables satellite-terrestrial resource allocation and task offloading through the cooperative scheduling of primary and secondary satellites. Based on this architecture, we design a deep reinforcement learning-based task-oriented mega-constellation edge offloading (TOMEO) scheme, which significantly improves task offloading efficiency by incorporating task sorting and resource clustering preprocessing mechanisms. Furthermore, a multiobjective double dueling noisy deep Q-network (DDNDQN) algorithm is introduced, which comprehensively considers multiple optimization objectives, including task completion rate, load balancing degree, task delay, and energy consumption, further enhancing task offloading efficiency. The experimental results demonstrate that the proposed offloading scheme outperforms the baseline schemes across all optimization objectives and improves the task offloading performance.
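
For readers unfamiliar with the dueling architecture that DDNDQN builds on, the PyTorch sketch below shows a plain dueling Q-head (state value plus mean-centred advantages); the noisy layers, double-Q target, and the multiobjective reward are omitted, and the layer sizes are assumptions.

```python
# Minimal dueling Q-network head (illustrative; not the paper's DDNDQN).
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # per-action advantage A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        a = self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return self.value(h) + a - a.mean(dim=-1, keepdim=True)
```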

  • Research Article
  • Cited by 58
  • 10.1016/0003-3472(64)90108-3
A field study of the activities of howler monkeys
  • Jan 1, 1964
  • Animal Behaviour
  • Irwin S Bernstein

  • Conference Article
  • Cited by 3
  • 10.1109/icc42927.2021.9500548
ONSRA: an Optimal Network Selection and Resource Allocation Framework in multi-RAT Systems
  • Jun 1, 2021
  • Alaa Awad Abdellatif + 4 more

The rapid proliferation of mobile and wearable devices, along with the boom in wireless applications, continues to grow every day. This motivates network operators to integrate and exploit wireless spectrum across multiple radio access networks to cope with such intensive demand while improving quality of service. However, it is crucial to develop innovative network selection techniques that consider the characteristics of heterogeneous networks while meeting applications' quality requirements. Thus, this paper develops an optimal network selection and resource allocation scheme over heterogeneous networks that aims to optimize latency, cost, and energy consumption while accounting for data compression at the edge. Indeed, our framework can significantly enhance the performance of wireless healthcare systems by enabling data transfer from patients' edge nodes to the cloud in a cost-effective and energy-efficient manner, while maintaining the strict Quality of Service (QoS) requirements of health applications. Our simulation results show that our solution significantly outperforms state-of-the-art techniques in terms of energy consumption, latency, and cost.
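
A simplified view of such a selection rule is sketched below: each candidate network is scored on latency, monetary cost, and transmission energy for the edge-compressed data, and the lowest weighted score wins. The weights, field names, and compression ratio are assumptions, not ONSRA's actual optimization model.

```python
# Illustrative weighted network-selection rule with edge compression.
def select_network(data_bits, networks, compression_ratio=0.5, w=(0.4, 0.3, 0.3)):
    sent = data_bits * compression_ratio                 # data left after compression
    best_name, best_score = None, float("inf")
    for net in networks:
        latency = sent / net["rate"] + net["rtt"]        # transfer + round-trip time
        cost = sent * net["price_per_bit"]               # monetary cost
        energy = net["tx_power"] * sent / net["rate"]    # transmission energy
        score = w[0] * latency + w[1] * cost + w[2] * energy
        if score < best_score:
            best_name, best_score = net["name"], score
    return best_name
```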

  • Research Article
  • Cited by 2
  • 10.33889/ijmems.2022.7.5.046
A Layer & Request Priority-based Framework for Dynamic Resource Allocation in Cloud- Fog - Edge Hybrid Computing Environment
  • Oct 1, 2022
  • International Journal of Mathematical, Engineering and Management Sciences
  • Sandip Kumar Patel + 1 more

The fog computing paradigm is one of the most promising frameworks for time-sensitive applications such as the IoT (Internet of Things). It is an extended computing paradigm mainly used to support cloud computing in executing deadline-based user requests in IoT applications. However, there are certain challenges in the hybrid IoT-cloud environment, such as poor latency, increased execution time, computational burden, and overload on the computing nodes. This paper offers a Layer & Request Priority-based Dynamic Resource Allocation Method (LP-DRAM), a new approach based on layer priority for ensuring effective resource allocation in a fog-cloud architecture. The suggested method achieves effective resource allocation by performing load balancing across the computing nodes. Unlike conventional resource allocation techniques, the proposed work assumes that the node type and location are not fixed. Tasks are allocated based on two constraints, duration and layer priority: tasks are initially assigned to edge computing nodes and, depending on resource availability at the edge, are further allocated to fog and cloud computing nodes. The proposed approach's performance was analyzed by comparing it with existing methodologies such as First Fit (FF), Best Fit (BF), First Fit Decreasing (FFD), Best Fit Decreasing (BFD), and DRAM techniques to validate the effectiveness of the proposed LP-DRAM.
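
The layer-priority idea can be pictured as a cascading allocation, as in the sketch below: a task tries the edge layer first, then fog, then cloud, landing on the least-loaded node in the first layer that has capacity and can meet its deadline. The data structures and field names are assumptions for illustration, not the LP-DRAM implementation.

```python
# Illustrative edge-first, then fog, then cloud allocation with load balancing.
def allocate(task, layers):
    """layers maps 'edge' / 'fog' / 'cloud' to lists of node dicts with
    'free' capacity, 'speed', and accumulated 'load'."""
    for name in ("edge", "fog", "cloud"):                        # layer priority
        candidates = [n for n in layers[name]
                      if n["free"] >= task["demand"]
                      and task["length"] / n["speed"] <= task["deadline"]]
        if candidates:
            node = min(candidates, key=lambda n: n["load"])      # balance the load
            node["free"] -= task["demand"]
            node["load"] += task["length"] / node["speed"]
            return name, node
    return None, None                                            # rejected: no layer fits
```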

  • Research Article
  • Cited by 31
  • 10.1016/j.jnca.2023.103647
Space-aerial-ground-sea integrated networks: Resource optimization and challenges in 6G
  • Apr 24, 2023
  • Journal of Network and Computer Applications
  • Sana Sharif + 2 more

  • Research Article
  • 10.52458/23485477.2025.v12.iss1.kp.a3
A Multi-Objective Optimization Framework for QoS-Driven Service and Channel Allocation in Dynamic Wireless Computer Network Environments
  • Jan 1, 2025
  • Kaav International Journal of Science, Engineering & Technology:A Peer Review Quarterly Journal
  • Hareram Kumar + 1 more

Efficient resource allocation in large-scale networking environments is critical for ensuring optimal Quality of Service (QoS) across diverse applications. This study proposes an innovative multi-objective optimization model for dynamic service and channel allocation, addressing key QoS parameters such as latency, energy consumption, and cost. The model employs a weight-based optimization framework, enabling real-time adaptive resource assignment while balancing trade-offs among QoS constraints, including channel capacity and service demand. The proposed model was evaluated using MATLAB simulations, demonstrating an 18% improvement in load balancing, a 22% reduction in latency, and a 15% improvement in energy efficiency. The numerical results validate the effectiveness of the proposed approach in high-demand scenarios, showcasing improvements in network utilization and seamless service delivery. This study establishes a scalable framework for QoS-driven resource allocation in 5G networks, IoT systems, and cloud computing environments, ensuring optimal service performance under dynamic conditions.

  • Research Article
  • 10.3390/electronics14142887
A Framework for Joint Beam Scheduling and Resource Allocation in Beam-Hopping-Based Satellite Systems
  • Jul 18, 2025
  • Electronics
  • Jinfeng Zhang + 4 more

With the rapid development of heterogeneous satellite networks integrating geostationary earth orbit (GEO) and low earth orbit (LEO) satellite systems, along with the significant growth in the number of satellite users, it is essential to consider frequency compatibility and coexistence between GEO and LEO systems, as well as to design effective system resource allocation strategies to achieve efficient utilization of system resources. However, existing beam-hopping (BH) resource allocation algorithms in LEO systems primarily focus on beam scheduling within a single time slot, lacking unified beam management across the entire BH cycle, resulting in low beam-resource utilization. Moreover, existing algorithms often employ iterative optimization across multiple resource dimensions, leading to high computational complexity and imposing stringent requirements on satellite on-board processing capabilities. In this paper, we propose a BH-based beam scheduling and resource allocation framework. The proposed framework first employs geographic isolation to protect the GEO system from the interference of the LEO system and subsequently optimizes beam partitioning over the entire BH cycle, time-slot beam scheduling, and frequency and power resource allocation for users within the LEO system. The proposed scheme achieves frequency coexistence between the GEO and LEO satellite systems and performs joint optimization of system resources across four dimensions—time, space, frequency, and power—with reduced complexity and a progressive optimization framework. Simulation results demonstrate that the proposed framework achieves effective suppression of both intra-system and inter-system interference via geographic isolation, while enabling globally efficient and dynamic beam scheduling across the entire BH cycle. Furthermore, by integrating the user-level frequency and power allocation algorithm, the scheme significantly enhances the total system throughput. The proposed progressive optimization framework offers a promising direction for achieving globally optimal and computationally tractable resource management in future satellite networks.

  • Research Article
  • Cited by 9
  • 10.1111/exsy.13362
Multi agent deep reinforcement learning for resource allocation in container‐based clouds environments
  • Jun 10, 2023
  • Expert Systems
  • S Nagarajan + 5 more

Virtualization enables the deployment of several virtual servers on the same physical layer, a critical component of the cloud. As cloud services advance, more applications that use repositories are developed, which adds to the overall load. Containers have evolved into the most reliable and lightweight virtualization technology for cloud services thanks to their flexible sorting, mobility, and scalability. In container-based clouds, containers can potentially cut data centre energy usage more than virtual machines (VMs) do, as containers are less energy intensive than VMs. Resource allocation is the most prevalent concern in cloud systems. However, resource allocation in container-based clouds (RAC) is novel and complicated due to its two-level architecture, which pairs containers with virtual machines and virtual machines with physical computers. In cloud container services, planner components are essential: they lower expenses while improving the performance and variety of workloads using cloud resources. The cloud infrastructure resource allocation framework is gaining popularity since it is energy-efficient and focuses on cloud data management to maximize income and minimize costs. In this paper, we propose a deep learning-based architecture capable of achieving high data centre energy efficiency and preventing Service Level Agreement (SLA) violations when deploying green cloud resources. This research describes a hybrid optimum and multi-agent deep reinforcement learning (MADRL) technique for dynamic task scheduling (DTS) in a container cloud environment. The MADRL-DTS model for the RAC problem considers VM overheads, VM types, and an affinity restriction. Then, to address the RAC issue, we develop a DTS hyper-heuristic technique. MADRL-RAC can derive allocation rules by recognizing workload trends and VM types from previous workload traces. Compared to modern procedures, the results demonstrate a significant reduction in energy consumption. The evaluation of energy-efficient resource allocation is performed in several virtualized environments to achieve high power usage effectiveness and CPU utilization.
