Dynamic task offloading and resource allocation for energy-harvesting end–edge–cloud computing systems
233
- 10.1109/tii.2018.2843365
- Oct 1, 2018
- IEEE Transactions on Industrial Informatics
21
- 10.1109/tmc.2023.3246462
- Mar 1, 2024
- IEEE Transactions on Mobile Computing
13
- 10.1109/tnsm.2023.3266238
- Dec 1, 2023
- IEEE Transactions on Network and Service Management
139
- 10.1109/tii.2020.2978946
- Mar 13, 2020
- IEEE Transactions on Industrial Informatics
11
- 10.23919/date.2018.8341998
- Mar 1, 2018
8
- 10.1109/jiot.2023.3309136
- Feb 15, 2024
- IEEE Internet of Things Journal
881
- 10.1109/tpds.2014.2316834
- Mar 26, 2015
- IEEE Transactions on Parallel and Distributed Systems
7
- 10.1109/tsc.2023.3296742
- Nov 1, 2023
- IEEE Transactions on Services Computing
82
- 10.1109/tmc.2022.3223119
- Jan 1, 2024
- IEEE Transactions on Mobile Computing
101
- 10.1109/tcc.2022.3163750
- Apr 1, 2023
- IEEE Transactions on Cloud Computing
- Research Article
46
- 10.1109/jiot.2021.3051031
- Jan 14, 2021
- IEEE Internet of Things Journal
In the foreseeable future, the rapid growth of devices in the Internet of Things (IoT) will make it difficult for 5G networks to ensure sufficient network resources. 6G technology has therefore attracted increasing attention, bringing new design concepts to dynamic real-time resource allocation. Because the resource requirements of devices are usually variable, a dynamic resource allocation method is needed to ensure the smooth execution of tasks. This article first designs a 6G-enabled massive IoT architecture that supports dynamic resource allocation. Then, a dynamic nested neural network is constructed, which adjusts the nested learning model structure online to meet the training requirements of dynamic resource allocation. An AI-driven collaborative dynamic resource allocation (ACDRA) algorithm is proposed based on the nested neural network combined with Markov decision process training for 6G-enabled massive IoT. Extensive simulations have been carried out to evaluate ACDRA in terms of several performance criteria, including resource hit rate and decision delay time. The results validate that ACDRA improves the average resource hit rate by about 8% and reduces the average decision delay time by about 7% compared with three existing reference algorithms.
- Research Article
161
- 10.1109/mwc.2006.1678164
- Aug 1, 2006
- IEEE Wireless Communications
Driven by the increasing popularity of wireless broadband services, future wireless systems will witness a rapid growth of high-data-rate applications with very diverse quality of service requirements. To support such applications under limited radio resources and harsh wireless channel conditions, dynamic resource allocation, which achieves both higher system spectral efficiency and better QoS, has been identified as one of the most promising techniques. In particular, jointly optimizing resource allocation across adjacent and even nonadjacent layers of the protocol stack leads to dramatic improvement in overall system performance. In this article we provide an overview of recent research on dynamic resource allocation, especially for MIMO and OFDM systems. Recent work and open issues on cross-layer resource allocation and adaptation are also discussed. Through this article, we wish to show that dynamic resource allocation will become a key feature in future wireless communications systems as the subscriber population and service demands continue to expand.
- Conference Article
4
- 10.1109/wf-iot48130.2020.9221234
- Jun 1, 2020
Wireless networks used for the Internet of Things (IoT) are expected to rely heavily on cloud-based computing and processing. Softwarised, centralised signal processing and network switching in the cloud enable flexible network control and management. In a cloud environment, dynamic computational resource allocation is essential to save energy while maintaining the performance of the processes. The stochastic features of Central Processing Unit (CPU) load variation, as well as the possibly complex parallelisation situations of cloud processes, make dynamic resource allocation an interesting research challenge. This paper models this dynamic computational resource allocation problem as a Markov Decision Process (MDP) and designs a model-based reinforcement-learning agent to optimise the dynamic allocation of CPU usage. The value iteration method is used for the reinforcement-learning agent to pick up the optimal policy for the MDP. To evaluate performance, we analyse two types of processes that can be used in cloud-based IoT networks with different levels of parallelisation capability, i.e., Software-Defined Radio (SDR) and Software-Defined Networking (SDN). The results show that our agent rapidly converges to the optimal policy, performs stably under different parameter settings, and matches or outperforms a baseline algorithm in energy savings across different scenarios.
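The value-iteration approach the abstract describes can be illustrated with a minimal sketch. The three-level CPU-allocation MDP below (states, transitions, energy/SLA rewards) is an invented toy model, not the paper's actual formulation; it only shows the mechanics of value iteration converging to a policy that allocates just enough CPU.

```python
# A minimal value-iteration sketch for a toy CPU-allocation MDP.
# The states, transitions, and rewards are illustrative assumptions,
# not the paper's model.
GAMMA = 0.9            # discount factor
LEVELS = [0, 1, 2]     # CPU allocation levels: low / medium / high
ACTIONS = [-1, 0, 1]   # lower, keep, or raise the allocation

def step(s, a):
    """Deterministic toy transition: move to the clamped new level."""
    return max(0, min(len(LEVELS) - 1, s + a))

def reward(s, a):
    """Energy cost grows with the allocated level; falling below an
    (assumed) demand of one level incurs an SLA penalty."""
    nxt = step(s, a)
    return -0.3 * nxt - (2.0 if nxt < 1 else 0.0)

def value_iteration(tol=1e-6):
    """Iterate the Bellman optimality backup until the value function converges."""
    V = {s: 0.0 for s in LEVELS}
    while True:
        V_new = {s: max(reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS)
                 for s in LEVELS}
        if max(abs(V_new[s] - V[s]) for s in LEVELS) < tol:
            return V_new
        V = V_new

def greedy_policy(V):
    """Extract the greedy (optimal) policy from the converged value function."""
    return {s: max(ACTIONS, key=lambda a: reward(s, a) + GAMMA * V[step(s, a)])
            for s in LEVELS}
```

With these assumed rewards the converged policy raises a low allocation, keeps a medium one, and lowers a high one, i.e. it settles on the cheapest level that still meets demand.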
- Research Article
1
- 10.17762/turcomat.v12i2.1191
- Apr 11, 2021
- Turkish Journal of Computer and Mathematics Education (TURCOMAT)
Cloud computing is an on-demand service: it offers dynamic, flexible resource allocation for reliable, guaranteed services in a pay-as-you-use manner. Because of the steadily increasing demand from clients for services and resources, it becomes hard to allocate resources accurately to client requests while also honouring the Service Level Agreements (SLAs) given by service providers. Dynamic resource allocation is one of the most challenging resource management problems, and it has attracted the attention of the research community over the last few years, with many researchers proposing new ways to face this challenge. Ad-hoc parallel data processing has emerged as one of the killer applications for Infrastructure-as-a-Service (IaaS) clouds, and a number of cloud providers have started to incorporate frameworks for parallel data processing into their products, making it easy for clients to access these services and deploy their programs. However, the processing frameworks currently in use were designed for static, homogeneous cluster setups, so the allocated resources may be inadequate for large parts of the submitted tasks and may unnecessarily increase processing cost and time. Moreover, because of the opaque nature of the cloud, static allocation of resources is feasible, but not in dynamic situations. The proposed generic data processing framework is designed to explicitly exploit dynamic resource allocation in the cloud for task scheduling and execution.
- Research Article
17
- 10.1109/tcomm.2017.2783974
- Aug 1, 2018
- IEEE Transactions on Communications
Scalable video streaming over femtocell networks relying on two-tier spectrum-sharing is designed for coping with time-varying channel conditions, stringent video QoS requirements as well as with strong cross-tier interference between the over-sailing macro- and the femtocells. Dynamic video layer selection and resource allocation are invoked to enable the adaptation of the scalable video streaming service to the dynamics of both channel quality and interference price fluctuations. We formulate the design as a constrained stochastic optimization problem, which strikes a compelling compromise between the perceivable quality of experience and the monetary implications of the interference. Since the time scale of resource allocation is more short term than that of the video layer selection, we decompose the original long-term utility optimization problem into a pair of readily tractable subproblems with the aid of two different time-scales by invoking the powerful technique of Lyapunov drift and optimization. By exploiting the specific structure of these subproblems, low-complexity algorithms are derived for dynamic video layer selection and resource allocation, which rely on the near-instantaneously available information rather than on any prior statistical knowledge. Finally, we derive the analytical bounds of the theoretically achievable performance. Experimental results are presented for characterizing the performance attained.
- Research Article
1
- 10.1080/0954898x.2024.2334282
- Apr 8, 2024
- Network (Bristol, England)
The rapid deployment of 5G networks necessitates innovative solutions for efficient and dynamic resource allocation. Current strategies, although effective to some extent, lack real-time adaptability and scalability in complex, dynamically changing environments. This paper introduces the Dynamic Resource Allocator using RL-CNN (DRARLCNN), a novel machine learning model addressing these shortcomings. By merging Convolutional Neural Networks (CNN) for feature extraction and Reinforcement Learning (RL) for decision-making, DRARLCNN optimizes resource allocation, minimizing latency and maximizing Quality of Service (QoS). Utilizing a state-of-the-art "5G Resource Allocation Dataset", the research employs Python, TensorFlow, and OpenAI Gym to implement and test the model in a simulated 5G environment. Results demonstrate the effectiveness of DRARLCNN, showcasing an R² score of 0.517, an MSE of 0.035, and an RMSE of 0.188, surpassing existing methods in allocation efficiency and latency. The DRARLCNN model not only outperforms existing methods in allocation efficiency and latency but also sets a new benchmark for future research in dynamic 5G resource allocation. Through its innovative approach and promising results, DRARLCNN opens avenues for further advancements in optimizing resource allocation within dynamic 5G networks.
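The R², MSE, and RMSE figures quoted in this abstract are standard regression metrics. As a quick reference, a sketch of how they are conventionally computed; the sample data below is illustrative and not from the paper:

```python
from math import sqrt

def regression_metrics(y_true, y_pred):
    """Return (R^2, MSE, RMSE) for paired ground-truth values and predictions."""
    n = len(y_true)
    # Mean squared error: average squared residual
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean_true = sum(y_true) / n
    ss_res = mse * n                                    # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)  # total sum of squares
    r2 = 1.0 - ss_res / ss_tot                          # coefficient of determination
    return r2, mse, sqrt(mse)
```

For example, `regression_metrics([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])` gives an R² of 0.98 and an MSE of 0.025; a perfect prediction gives R² = 1.0 and MSE = 0.0.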
- Research Article
- 10.55730/1300-0632.3826
- Mar 1, 2022
- Turkish Journal of Electrical Engineering and Computer Sciences
Cloud data centres, which are characterized by dynamic workloads, may, if not optimized for energy consumption, lead to increased heat dissipation and eventually impact the environment adversely. Consequently, optimizing energy usage has become a hard requirement in today's cloud data centres, where the major part of energy consumption is attributed to computing and cooling systems. Motivated by this, the paper proposes an online algorithm for dynamic resource allocation, namely the temperature-aware online dynamic resource allocation algorithm (TARA). TARA demonstrates a novel algorithm design that adapts dynamic resource allocation based on the temperature of a data centre using computational fluid dynamics (CFD). TARA also demonstrates a new dynamic resource reclaim strategy for making efficient resource allocations, leading to efficient energy consumption in dynamic environments. The proposed algorithm provides optimal resource allocation considering energy efficiency without being overwhelmed by online dynamic workloads, and the optimal energy-efficient dynamic resource allocation for online workloads eventually optimizes the computing and cooling energy consumption. We show through theoretical analysis the correctness, efficiency and optimality bounds, given as $TARA(P) \leq 2OPT(P)$ relative to the optimal solution provided by an offline dynamic resource allocation algorithm $(OPT(P))$. We show through empirical analysis that the proposed method is efficient and saves 26% of energy when the data centre utilization is 100%, compared to batched reclaim. The performance analysis shows significant improvement in optimizing computing and cooling efficiency. TARA can be used in multiple areas of on-demand dynamic resource allocation in cloud computing, such as resource allocation for virtual machine creation, resource allocation for virtual machine migrations, and virtual resource assignment for elastic cloud applications.
- Research Article
1
- 10.1016/j.comnet.2024.110876
- Oct 24, 2024
- Computer Networks
Dynamic resource allocation and offloading optimization for network slicing in B5G multi-tier multi-tenant systems
- Conference Article
- 10.1117/12.2653712
- Dec 8, 2022
With the continuous development of new media and network information technology, more and more reconfigurable network digital media resources are needed to build a reconfigurable resource allocation model in cyberspace. The dynamic allocation method for reconfigurable network digital media resources based on multi-source feature-fusion cluster analysis is feasible to a certain extent: combined with the result of feature extraction, dynamic allocation of reconfigurable resources is realized. Research on reconfigurable resource allocation methods for network digital media is of great significance for improving the utilization of network resources. Traditional dynamic allocation methods for reconfigurable network digital media resources mainly include dynamic allocation based on feature correlation analysis, dynamic allocation based on PCA (principal component analysis), and dynamic allocation fused with k-means clustering; these use statistical feature extraction and autocorrelation detection to realize dynamic allocation of reconfigurable resources. However, traditional resource allocation methods have poor fitness and weak feature-identification ability, and therefore need to change. This paper mainly describes a dynamic allocation method for reconfigurable network digital media resources, aiming to strengthen the allocation of network digital media resources, further improve the dynamic allocation ability for reconfigurable resources, and speed up the application of network digital media.
- Conference Article
24
- 10.1109/pervasive.2015.7087186
- Jan 1, 2015
Cloud computing has become popular among users by offering various resources on demand: it provides dynamic, flexible resource allocation for reliable service in a pay-as-you-use manner. Cloud computing is service oriented rather than application oriented. In cloud computing, dynamic flexibility in resource allocation is provided by virtualization technology, which allocates server resources dynamically based on application requests. In this paper, we propose a dynamic resource allocation system that allocates resources to cloud users. We use skewness, which measures the uneven utilization of multiple resources on each VM, and balance load across VMs according to the skew value. By minimizing the skew value of each VM, we can combine different kinds of resources well and improve the resource utilization of the server. The proposed algorithms prevent overload through effective load balancing and future load prediction, and achieve optimized performance in terms of server resource utilization with minimum energy consumption.
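The skewness metric this abstract relies on measures how unevenly a machine's different resources (CPU, memory, I/O, ...) are utilized. One plausible definition, sketched below, is the dispersion of per-resource utilizations around their mean; this exact formula is an assumption for illustration, and the paper may define it differently.

```python
from math import sqrt

def skewness(utilizations):
    """Dispersion of a server's per-resource utilizations around their mean.
    A perfectly even server (all resources equally used) scores 0; the more
    lopsided the usage, the larger the value."""
    mean = sum(utilizations) / len(utilizations)
    return sqrt(sum((u / mean - 1.0) ** 2 for u in utilizations))
```

Minimizing this value when placing VMs steers the scheduler toward servers whose resources end up evenly used, e.g. pairing a CPU-heavy VM with a memory-heavy one instead of stacking two CPU-heavy VMs on the same host.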
- Conference Article
6
- 10.1109/iscon52037.2021.9702325
- Oct 22, 2021
Today's is a technological era, with cloud computing at the top. Cloud computing offers on-demand service to users, allows dynamic, flexible resource allocation, and works on a pay-as-you-use basis. Since many users can demand services at the same time, there must be provision to supply users with resources efficiently. This paper reviews different optimal resource allocation approaches based on topology and dynamic resource allocation, which can be used for linear scheduling strategies and parallel processing of resource allocation [2]. It also discusses how cloud service providers maximize profit through user satisfaction, what should be avoided when devising an optimal resource strategy, and the advantages and disadvantages of resource allocation usage in cloud computing.
- Research Article
2
- 10.1007/s11038-024-09557-5
- Sep 16, 2024
- Discover Space
A dynamic low-earth-orbit (LEO) multi-satellite hopping-beam resource allocation method based on time slicing is proposed to address the changing coverage and beam interference caused by satellite mobility in multi-satellite, multi-beam communication scenarios. The method involves six processes: (1) establishing a LEO satellite system of three satellites with identical configuration parameters and motion speed parameters; (2) assigning a value to the time-slicing weight coefficient and, following the principle of load balancing, taking minimization of the difference in demand between dual-satellite services as the optimization objective; (3) obtaining an initial power resource allocation with maximum system capacity as the optimization objective; (4) considering the mobility of LEO satellites, updating the weight coefficients and dynamically allocating power resources to obtain a dynamic time-resource allocation matrix and the optimal service cell set; (5) updating the independent coverage cell set of the third LEO satellite, using the Lagrange equation of the weighted-fairness objective function to obtain the time-resource allocation optimization matrix, and using a one-way search to determine the optimal service cell set; (6) with the goal of maximizing system capacity, obtaining the optimal solution of the power resource allocation matrix for the third LEO satellite. System throughput is introduced to measure system performance intuitively, and adjustments are made based on the overlap of the three satellites' coverage. The resource allocation method is based on a fairness model, considering load balancing and dynamic changes in coverage range. While improving overall system throughput, it ensures the fairness of resource allocation and allocates time, frequency, and power resources to multiple satellites with overlapping coverage, which helps to improve the throughput and resource utilization of satellite systems.
- Conference Article
1
- 10.1109/iciii.2011.304
- Nov 1, 2011
This paper conducts research on dynamic multi-project human resource allocation methods from four aspects. First, it establishes a project evaluation system using a hierarchical fuzzy evaluation method based on fuzzy theory. Second, it quantifies staff capacity and estimates staff task efficiency based on a multidimensional model and the Fleischmann analysis system. Third, it develops a multi-project human resource and time allocation formula based on the knapsack problem principle. Finally, it performs dynamic multi-project human resource allocation optimization. A simulation example showed that the proposed method makes human resource allocation more rational and reduces project cost, providing a scientific and efficient way to schedule human resources.
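The allocation formula in this abstract is said to follow the knapsack problem principle. As a hedged illustration of that principle, a standard 0/1 knapsack dynamic program that picks assignments maximizing total value within a cost budget; the value/cost model and data are invented for the example, not taken from the paper:

```python
def knapsack(values, costs, budget):
    """0/1 knapsack DP: pick a subset of items maximizing total value
    subject to a total-cost budget. Returns (best_value, chosen_indices)."""
    n = len(values)
    best = [0] * (budget + 1)  # best[c]: max value achievable with cost <= c
    choice = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        # Iterate capacities downward so each item is used at most once
        for c in range(budget, costs[i] - 1, -1):
            if best[c - costs[i]] + values[i] > best[c]:
                best[c] = best[c - costs[i]] + values[i]
                choice[i][c] = True
    # Backtrack to recover which items were chosen
    chosen, c = [], budget
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            chosen.append(i)
            c -= costs[i]
    return best[budget], sorted(chosen)
```

For example, `knapsack([60, 100, 120], [1, 2, 3], 5)` returns `(220, [1, 2])`: with a budget of 5, taking items 1 and 2 beats any other combination.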
- Conference Article
7
- 10.1109/icumt.2012.6459788
- Oct 1, 2012
Resource allocation in a heterogeneous network is a complex task due to the diversified requirements of its member networks. In a heterogeneous network, different subnetworks cooperate with each other to offer services to various applications through smart terminals. In this paper we present a dynamic radio resource allocation technique that allocates transmission bandwidth to the subnetworks of a heterogeneous network in a cooperative manner to maximize network capacity, application QoS, and bandwidth utilization. We investigate an LTE/WLAN-based infrastructure heterogeneous network where transmission resources are allocated to member networks in a spectrally efficient manner to maximize the throughput of the combined network. The proposed radio resource allocation algorithm uses a traffic prediction technique to estimate the expected load of member networks and then allocates the transmission resources to those networks cooperatively. The performance of the proposed algorithm was analyzed using an OPNET simulation model. Performance results show that the proposed cooperative resource allocation technique can significantly improve the application QoS of LTE and WLAN users.
- Conference Article
3
- 10.1109/glocom.2016.7841762
- Dec 1, 2016
We highlight the advantages of dynamic wavelength allocation and path rerouting in hybrid optical flow-switched data center networks compared with static electronic packet-switched data center networks. Current electronic packet-switched networks perform routing based on lookup tables and shortest path routing over high speed optical connections. However, load balancing and path rerouting are not designed in coordination with the optical data plane. In this work, we design and analyze the performance of load balancing and resource allocation in hybrid flow data center architectures. In the hybrid flow architecture, the transport layer and control plane are responsible for load balancing, congestion control and reliable data delivery in wavelength division multiplexed networks. The transport layer abstracts the underlying network and data link layers from user applications. Applications are unaware of whether flows are transmitted via the electronic data plane, optical data plane, or both, where the relevant metric for applications is transaction delay. We show that the data center architecture that minimizes delay in the presence of elephant flows and dynamic loads is the hybrid flow architecture with dynamic resource allocation.