Cross-domain resources optimization for hybrid edge computing networks: federated DRL approach
- Conference Article
- 10.1109/iccworkshops50388.2021.9473628
- Jun 1, 2021
Multi-access edge computing (MEC) is a distributed computing framework that provides computation capability and data storage at the edge of the network to save bandwidth and reduce latency. However, the computing capacity of the MEC system can be insufficient when an excessive number of tasks is offloaded for execution. To alleviate the burden on MEC servers (MECSs) and tap the underutilized resources of wireless devices (WDs), we investigate the computation offloading and shunting problem in the hybrid edge computing (HEC) network. In this paper, we formulate an optimization problem to minimize the average weighted sum of total time delay and energy consumption. Owing to its high computational complexity and dimensionality, we propose the deep reinforcement learning-based computation offloading and shunting (DCOS) algorithm to solve this problem. Finally, we validate the convergence property and evaluate the time complexity of the DCOS algorithm. Simulation results show that the DCOS algorithm reduces the average weighted cost significantly compared with the other algorithms.
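The weighted delay-energy objective described above can be sketched in a few lines; the function name, default weights, and per-device averaging below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of an average weighted delay-energy cost, in the spirit
# of the DCOS objective (weights and names are illustrative, not the paper's).

def weighted_cost(delays, energies, w_time=0.5, w_energy=0.5):
    """Average weighted sum of total time delay and energy consumption
    over all wireless devices (one entry per device)."""
    assert len(delays) == len(energies) and delays
    per_device = [w_time * d + w_energy * e for d, e in zip(delays, energies)]
    return sum(per_device) / len(per_device)

# Example: two devices with (delay, energy) samples.
cost = weighted_cost([0.2, 0.4], [1.0, 2.0])  # -> 0.9
```

Shifting `w_time` toward 1 prioritizes latency over battery life, which is the kind of trade-off the offloading policy must learn.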
- Research Article
- 10.1155/2022/9230521
- Aug 13, 2022
- Mobile Information Systems
In the Internet of Vehicles (IoV), the limited computing capacity of vehicles can hardly process intensive computation tasks locally. Such tasks can be offloaded to multi-access edge computing (MEC) servers for processing, with MEC providing the required computing capacity to nearby vehicles. In this paper, we consider a scenario with both cooperation and competition between vehicles, where the offloading decision of any vehicle affects the decisions of the others and the computing resource allocation strategies of the MEC servers change dynamically. We therefore propose a joint optimization scheme for computation offloading decisions and computing resource allocation based on decentralized multi-agent deep reinforcement learning. The proposed scheme learns the optimal actions to minimize a total weighted cost, designed as the vehicles' satisfaction, based on the type of stochastically arriving tasks and the dynamic interaction between the MEC server and vehicles within the coverage of different RSUs. Numerical results show that the proposed algorithm, based on decentralized multi-agent deep deterministic policy gradient (DDPG) and named De-DDPG, can autonomously learn the optimal computation offloading and resource allocation policy without a priori knowledge and outperforms the three baseline algorithms in terms of reward.
- Research Article
- 10.1002/cpe.7995
- Dec 22, 2023
- Concurrency and Computation: Practice and Experience
Network architects and engineers face challenges in meeting the increasing complexity and low-latency requirements of various services. To tackle these challenges, multi-access edge computing (MEC) has emerged as a solution, bringing computation and storage resources closer to the network's edge. This proximity enables low-latency data access, reduces network congestion, and improves quality of service. Effective resource allocation is crucial for leveraging MEC capabilities and overcoming its limitations, but traditional approaches lack intelligence and adaptability. This study explores deep reinforcement learning (DRL) as a technique to enhance resource allocation in MEC. DRL has gained significant attention for its ability to adapt to changing network conditions and to handle complex, dynamic environments more effectively than traditional methods. The study presents the results of applying DRL for efficient and dynamic resource allocation in MEC, optimizing allocation decisions based on the real-time environment and user demands. By providing an overview of current research on DRL-based resource allocation in MEC, including the components, algorithms, and performance metrics of various DRL-based schemes, this review article demonstrates the superiority of DRL-based resource allocation schemes over traditional methods in diverse MEC conditions. The findings highlight the potential of DRL-based approaches for addressing the challenges of resource allocation in MEC.
- Research Article
- 10.1002/ett.70003
- Nov 1, 2024
- Transactions on Emerging Telecommunications Technologies
In multi-access edge computing (MEC), computational task offloading from mobile terminals (MTs) is expected to enable green applications under the restrictions of energy consumption and service latency. Nevertheless, the diverse statuses of edge servers and mobile terminals, along with fluctuating offloading routes, make computational task offloading challenging. To support green applications, we first present a novel computational task offloading model. In particular, the model is constrained by energy consumption and service latency considerations: (1) smart mobile terminals with computational capabilities can serve as carriers; (2) the diverse computational and communication capacities of edge servers can enhance the offloading process; (3) the unpredictable routing paths of mobile terminals and edge servers can result in varied information transmissions. We then propose an improved deep reinforcement learning (DRL) algorithm named PS-DDPG, which incorporates the prioritized experience replay (PER) and stochastic weight averaging (SWA) mechanisms into deep deterministic policy gradients (DDPG), to seek an optimal offloading mode that reduces energy consumption. The algorithm runs at each MT and is responsible for decisions on task partition, channel allocation, and transmission power control. Our approach refines the estimation of observed values and maintains memory via write operations.
The replay buffer holds data from previous time slots to update both the actor and critic networks, after which the buffer is reset. Comprehensive experiments validate the superior performance of our algorithm, including its stability and convergence, compared with prior studies.
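A minimal prioritized experience replay (PER) buffer, in the spirit of the PS-DDPG training loop above, can be sketched as follows; the proportional sampling rule is standard PER, but the capacity handling and constants are illustrative assumptions rather than the paper's implementation.

```python
import random

class PERBuffer:
    """Toy PER buffer: transitions with larger TD error are replayed more often."""

    def __init__(self, capacity=10000, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, td_error=1.0):
        if len(self.data) >= self.capacity:  # drop the oldest transition
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        # Small epsilon keeps zero-error transitions sampleable.
        self.priorities.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, batch_size):
        # Sample with probability proportional to stored priority.
        return random.choices(self.data, weights=self.priorities, k=batch_size)

buf = PERBuffer(capacity=2)
buf.add(("s0", "a0", -1.0, "s1"), td_error=0.5)
buf.add(("s1", "a1", -0.5, "s2"), td_error=2.0)
batch = buf.sample(4)  # high-TD-error transitions appear more often
```

A full implementation would also apply importance-sampling weights to correct the bias this non-uniform sampling introduces.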
- Research Article
- 10.1016/j.comnet.2021.108356
- Jul 26, 2021
- Computer Networks
Smart computational offloading for mobile edge computing in next-generation Internet of Things networks
- Conference Article
- 10.1109/gcwkshps52748.2021.9682057
- Dec 1, 2021
Efficient administration of computing resources in Multi-access Edge Computing (MEC) networks is a very active research topic. Task sharing in particular is one of the principal problems in MEC architectures, although it is mostly addressed from the end-user standpoint. Moreover, standard MEC frameworks do not consider task sharing schemes among computing servers at the edge level, even though such mechanisms could increase resource utilization in MEC setups. The integration of blockchain technologies into multi-server cloud and edge computing solutions has started to gain steam in recent times, largely due to its potential to enhance the functionality, security, and privacy of cloud-based architectures. In this context, this paper presents a task sharing mechanism for MEC servers with precedent-dependent tasks based on the Hyperledger Fabric framework. Hyperledger Fabric is a blockchain platform that provides a lightweight and distributed interaction playing field for the servers, mitigating the security and privacy concerns of the collaboration scheme. To share tasks among the servers, the call dependencies of a user application's tasks are captured by a control-flow graph, and an optimization problem is formulated to obtain the allocation of tasks. Numerical results on the performance of the proposed task sharing mechanism are presented.
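The precedent-dependent task structure above can be illustrated with a dependency graph and a topological ordering that any allocation must respect; the helper below is a generic sketch (hypothetical names), not the paper's optimization formulation.

```python
from collections import deque

# Sketch: capture call dependencies as a graph and produce a precedence-
# respecting execution order before tasks are shared among MEC servers.

def topo_order(deps):
    """deps: {task: [tasks it depends on]} -> list in executable order."""
    indeg = {t: len(d) for t, d in deps.items()}
    children = {t: [] for t in deps}
    for t, d in deps.items():
        for parent in d:
            children[parent].append(t)
    ready = deque(t for t, k in indeg.items() if k == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for c in children[t]:  # releasing t may unblock its dependents
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return order

# Example: task C depends on both A and B, so C is scheduled last.
order = topo_order({"A": [], "B": [], "C": ["A", "B"]})
```

Servers can then be assigned tasks in this order, with a task only dispatched once all of its predecessors have completed.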
- Research Article
- 10.3390/s20164360
- Aug 5, 2020
- Sensors
The dissemination of false messages in the Internet of Vehicles (IoV) has a negative impact on road safety and traffic efficiency. It is therefore critical to detect fake news quickly, given news timeliness in IoV. In this paper, we propose Quick Fake News Detection (QcFND), a network computing framework that exploits technologies from Software-Defined Networking (SDN), edge computing, blockchain, and Bayesian networks. QcFND consists of two tiers: edge and vehicles. The edge is composed of Software-Defined Road Side Units (SDRSUs), which extend traditional Road Side Units (RSUs) and host virtual machines such as SDN controllers and blockchain servers. The SDN controllers help implement load balancing in the IoV. The blockchain servers accommodate the reports submitted by vehicles and calculate the probability that a traffic event is present, providing time-sensitive services to passing vehicles. Specifically, we exploit a Bayesian network to infer whether to trust the received traffic reports. We test the performance of QcFND on three platforms, i.e., Veins, Hyperledger Fabric, and Netica. Extensive simulations and experiments show that QcFND achieves good performance compared with other solutions.
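The event-probability calculation can be illustrated with a simple Bayesian update over vehicle reports; the prior and the report likelihoods below are made-up values for illustration, not QcFND's actual Bayesian network.

```python
# Illustrative Bayesian update for deciding whether to trust traffic reports.

def event_posterior(prior, reports, p_true_pos=0.9, p_false_pos=0.1):
    """P(event | reports): each report is True (confirms) or False (denies).
    p_true_pos:  P(confirming report | event really happened)
    p_false_pos: P(confirming report | no event)."""
    p_event, p_no_event = prior, 1.0 - prior
    for r in reports:
        p_event *= p_true_pos if r else (1.0 - p_true_pos)
        p_no_event *= p_false_pos if r else (1.0 - p_false_pos)
    return p_event / (p_event + p_no_event)

# Three confirming reports push a weak prior (0.1) well above 0.95.
p = event_posterior(prior=0.1, reports=[True, True, True])
```

In the framework, a threshold on such a posterior would decide whether the reported event is broadcast to passing vehicles.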
- Research Article
- 10.1109/access.2018.2833619
- Jan 1, 2018
- IEEE Access
With the widespread use of smart mobile devices, the exponential growth of mobile Internet traffic and newly emerging services, such as Internet of Things, virtual reality/augmented reality, and serious games, the network performance requirements for delay and bandwidth are increasing. The inherent long-distance propagation and possible network congestion of mobile cloud computing may lead to excessive latency, which cannot satisfy the new delay-sensitive mobile applications. The proximity of edge computing provides the possibility of low-latency access and raises increasing interest from non-mobile operators; therefore, edge computing faces a variety of access network technologies, including wired (fixed) and wireless (mobile) access. In this paper, we propose an integrated heterogeneous networking scheme for multi-access edge computing and fiber-wireless access networks that uses network virtualization to achieve the dynamic orchestration of the network, storage, and computing resources to meet diverse application demands. The global view and centralized control of the entire network and the unified scheduling of the resources in the scheme anticipate the convergence of various types of access networks and the edge cloud. The multipath transmission of the service flows is further combined as an instance of integrated edge cloud networking. An experimental testbed is established in the laboratory, and the performance of the multi-access edge computing and networking is evaluated to verify the feasibility and effectiveness of the scheme. The results demonstrate that the scheme can effectively improve the network performance.
- Research Article
- 10.1109/tii.2020.3040180
- Nov 25, 2020
- IEEE Transactions on Industrial Informatics
With the potential to implement computing-intensive applications, edge computing is combined with digital twin (DT)-empowered Internet of Vehicles (IoV) to enhance intelligent transportation capabilities. By updating digital twins of vehicles and offloading services to edge computing devices (ECDs), the insufficiency of vehicles' computational resources can be complemented. However, owing to the computational intensity of DT-empowered IoV, an ECD can overload under excessive service requests, which deteriorates the quality of service (QoS). To address this problem, this article analyzes a multiuser offloading system in which QoS is reflected through the response time of services. A service offloading (SOL) method with deep reinforcement learning is then proposed for DT-empowered IoV in edge computing. To obtain optimized offloading decisions, SOL leverages deep Q-network (DQN), which combines the value function approximation of deep learning with reinforcement learning. Experiments with comparative methods indicate that SOL is effective and adaptable in diverse environments.
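The DQN value update behind such offloading decisions reduces to the Bellman target; the sketch below uses an illustrative reward (e.g., negative response time) and hypothetical values, not the SOL method's exact formulation.

```python
# Sketch of the DQN target: immediate reward plus discounted best
# next-state value from the target network's Q estimates.

def dqn_target(reward, next_q_values, gamma=0.99, done=False):
    """y = r + gamma * max_a' Q_target(s', a'); no bootstrap at episode end."""
    return reward if done else reward + gamma * max(next_q_values)

# Three candidate offloading actions (e.g., three ECDs); the agent trains its
# Q-network toward this target for the action it took.
y = dqn_target(reward=-0.2, next_q_values=[1.0, 2.5, 0.3])  # -> 2.275
```

The online network is then regressed toward `y`, which is how value function approximation and reinforcement learning combine in DQN.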
- Research Article
- 10.1155/int/8064086
- Jan 1, 2025
- International Journal of Intelligent Systems
With advancements in in-vehicle computing and Multi-access Edge Computing (MEC), the Internet of Vehicles (IoV) is increasingly capable of supporting Vehicle-oriented Edge Intelligence (VEI) applications, such as autonomous driving and Intelligent Transportation Systems (ITSs). However, IoV systems that rely solely on vehicular sensors often encounter limitations in forecasting events beyond current roadways, which is critical for regional transportation management. Moreover, the inherent temporal dependency of VEI application data poses risks of interruptions, impeding the seamless tracking of incremental information. To address these challenges, this paper introduces a joint task offloading and resource allocation strategy within an MEC environment that collaboratively integrates vehicles and Roadside Infrastructure Sensors (RISs). The strategy carefully considers the Doppler shift caused by vehicle mobility and the Tolerance for Interruptions of Incremental Information (T3I) in VEI applications. We establish a decision-making framework that actively balances delay, energy consumption, and the T3I metric by formulating task offloading as a stochastic network optimization problem. Using Lyapunov optimization, we decompose this complex problem into three targeted subproblems: optimizing local computational capacity, MEC computational capacity, and comprehensive offloading decisions. For efficient offloading, we develop algorithms that separately optimize offloading scheduling, channel allocation, and transmission power control. Notably, we incorporate a Potential Minimum Point (PMP) algorithm to boost parallel processing and reduce computational scale through matrix decomposition. Evaluations of our algorithm show that it excels in both complexity and accuracy, with accuracy improvements ranging from 74.3% to 114.0% in asymmetric resource environments.
Simulation and experimental studies on offloading performance validate the effectiveness of our framework, which significantly balances network performance, reduces latency, and improves system stability.
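The per-slot decomposition that Lyapunov optimization enables can be sketched with a virtual-queue update and a drift-plus-penalty objective; the trade-off parameter `V` and the scalar penalty (standing in for the delay/energy/T3I mix) are placeholders, not the paper's model.

```python
# Minimal Lyapunov-style sketch: a backlog queue evolves each slot, and the
# per-slot decision minimizes a weighted sum of penalty and queue drift.

def queue_update(q, arrival, service):
    """Q(t+1) = max(Q(t) - service, 0) + arrival."""
    return max(q - service, 0.0) + arrival

def drift_plus_penalty(q, arrival, service, penalty, v=10.0):
    """Per-slot objective: V * penalty + Q(t) * (arrival - service).
    Large Q pushes the controller to serve the backlog; large V favors
    minimizing the penalty (e.g., energy) instead."""
    return v * penalty + q * (arrival - service)

q = queue_update(5.0, arrival=2.0, service=3.0)  # backlog shrinks to 4.0
```

Minimizing `drift_plus_penalty` independently in each slot is what turns the long-term stochastic problem into the tractable subproblems described above.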
- Conference Article
- 10.1109/aisc56616.2023.10085158
- Jan 27, 2023
In ultra-dense networks (UDN), a special case of 5G cellular networks, the density of base stations is higher than that of the end users (UEs). Hence, a UE is likely to be within the coverage of multiple base stations at any given time. This paper provides a scheduling algorithm for multi-access edge computing (MEC) in UDN. Unlike existing works, the transmission scheduling (i.e., assigning base stations to each client) and the computation resource scheduling are jointly considered. Due to the uncertainties in task generation and path losses, we model the scheduling problem as a deep reinforcement learning (DRL) problem that maximizes the total utility of the clients. The DRL model (based on actor-critic neural networks) is trained using the deep deterministic policy gradient (DDPG) algorithm. The results show convergence of the total utility and better performance compared with a greedy policy and a priority-based scheduling policy.
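One characteristic ingredient of DDPG training is the soft update that slowly tracks the online actor-critic weights with target networks; the sketch below uses plain lists in place of network parameters, and `tau` is an assumed hyperparameter.

```python
# DDPG-style soft target update: theta_target <- tau*theta + (1-tau)*theta_target.

def soft_update(target_weights, online_weights, tau=0.005):
    """Blend online weights into the target network, element-wise."""
    return [tau * w + (1.0 - tau) * t
            for t, w in zip(target_weights, online_weights)]

# With tau=0.1, the target moves 10% of the way toward the online weights.
target = soft_update([0.0, 1.0], [1.0, 1.0], tau=0.1)  # -> [0.1, 1.0]
```

Keeping the target networks slow-moving stabilizes the bootstrapped critic targets, which matters under the task-generation and path-loss uncertainties described above.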
- Research Article
- 10.52783/jisem.v10i55s.11616
- May 30, 2025
- Journal of Information Systems Engineering and Management
Multi-Access Edge Computing (MEC) is a revolutionary paradigm that enables resource-efficient, low-latency applications by placing computational resources closer to the end users. Dynamic resource allocation and task offloading are essential for guaranteeing optimal performance, since MEC systems support a wide range of computationally demanding and time-sensitive applications. However, a number of obstacles, including network congestion, limited processing capacity, and fluctuating user demands, make it difficult to manage these resources efficiently in real time. In this regard, the intricate trade-offs between latency, energy efficiency, computational resources, and quality of service (QoS) can be effectively addressed by multi-objective optimization (MOO) and deep reinforcement learning (DRL). This research investigates the use of MOO and DRL to optimize resource allocation and task offloading in MEC systems. In particular, we suggest a hybrid framework that uses multi-objective optimization to balance conflicting objectives and DRL for adaptive, real-time decision-making. The study offers a thorough model, simulations, and findings that show how well our strategy enhances system performance in a variety of scenarios. This study makes two contributions: first, it presents a new method for dynamic resource management and task offloading in MEC systems; second, it shows that combining MOO and DRL in practical applications is feasible and potentially advantageous. The anticipated results are improved system performance, energy efficiency, and user satisfaction, representing a major step toward effective, scalable MEC environments.
- Conference Article
- 10.1109/ictc55196.2022.9952477
- Oct 19, 2022
Edge computing has emerged as a hot topic in recent years, coinciding with the advancement of 5G deployment. With multi-access edge computing (MEC), mobile data can be transmitted in real time with ultra-low latency, and edge computing is required to deliver 5G service. MEC is an evolution of cloud computing that moves applications from centralized data centers to the network edge, bringing technology resources closer to end users and their devices. Mitratel is the largest tower provider in Indonesia, one of the most attractive markets in the world, with nationwide coverage across the country. It aspires to become the leading operator of mission-critical infrastructure to enhance Indonesia's digital future and is expanding to provide a full suite of digital infrastructure solutions. Within the tower ecosystem, Mitratel also offers various support services, such as edge infrastructure solutions, tower fiberization, small cells, power-to-tower, and other technologies, to help accelerate 5G development and provide access and convenience to cellular service providers throughout Indonesia. Telkomsel is the largest mobile operator in Indonesia, with a variety of mobile services and more than 170 million subscribers. Both are subsidiaries of Telkom, an Indonesian multinational telecommunication holding group. Mitratel, Telkomsel, and Telkom worked together to conduct a proof of concept (PoC) of MEC in the Indonesian tower ecosystem. Mitratel utilized its BTS Room, which functions as a micro edge data center, to provide the edge infrastructure solution, consisting of a conditioned room, power, connectivity, a Network Monitoring System (NMS), and other supporting infrastructure, while Telkomsel brought mobile virtual networks, edge servers, and use cases. Mitratel, Telkomsel, and Telkom successfully completed the MEC PoC at the Mitratel BTS Room in GBK (Gelora Bung Karno), Jakarta.
This PoC shows that Mitratel's tower ecosystem can reliably deliver MEC service, as proven by URLLC measurements: the latency is 5 ms with MEC versus 15 ms without. In addition, use cases such as trash detection, mask detection, and virtual reality were successfully implemented. A comparison between Mitratel's micro edge data center and American Tower's identified several differences, which will be the subject of further development of this PoC.
- Research Article
- 10.1109/tpds.2023.3287633
- Aug 1, 2023
- IEEE Transactions on Parallel and Distributed Systems
Multi-access edge computing (MEC) and network function virtualization (NFV) are promising technologies for supporting emerging IoT applications, especially computation-intensive ones. In an NFV-enabled MEC environment, a service function chain (SFC), i.e., a set of ordered virtual network functions (VNFs), can be mapped onto MEC servers. Mobile devices (MDs) can offload computation-intensive applications, represented by SFCs, fully or partially to MEC servers for remote execution. This paper studies the partial offloading and SFC mapping joint optimization (POSMJO) problem in an NFV-enabled MEC system, where the data of an incoming task is partitioned into two parts, one executed locally and the other offloaded to the edge infrastructure for execution. The two parts are independent of each other, but both must be processed by the same SFC. The objective is to minimize the long-term average cost, a combination of execution delay, the MD's energy consumption, and the usage charge for edge computing. This problem consists of two closely related decision-making steps, namely task partition and VNF placement, and is highly complex and quite challenging. To address this, we propose a cooperative dual-agent deep reinforcement learning (CDADRL) algorithm, in which two agents interact with each other. Simulation results show that the proposed algorithm outperforms three combinations of deep reinforcement learning algorithms with respect to cumulative reward and surpasses a number of baseline algorithms in terms of execution delay, energy consumption, and usage charge.
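A partial-offloading cost of the kind minimized above can be sketched as follows: the two parts run in parallel, so delay is the slower of the local and offloaded paths, plus weighted energy and usage charge. All parameter names, rates, and the toy energy model are assumptions for illustration, not the POSMJO formulation.

```python
# Hypothetical cost for splitting a task: `ratio` of its data goes to the edge.

def partial_offload_cost(bits, ratio, f_local, f_edge, rate,
                         w_delay=1.0, w_energy=0.1, charge_per_bit=1e-9):
    """bits: task size; f_local/f_edge: processing rates (bits/s);
    rate: uplink rate (bits/s). Returns weighted delay + energy + charge."""
    local_delay = (1 - ratio) * bits / f_local
    edge_delay = ratio * bits / rate + ratio * bits / f_edge  # tx + compute
    energy = (1 - ratio) * bits * 1e-9       # toy local-compute energy model
    charge = ratio * bits * charge_per_bit   # edge usage charge
    # Both parts proceed in parallel, so overall delay is the maximum.
    return w_delay * max(local_delay, edge_delay) + w_energy * energy + charge

# Sweeping `ratio` in [0, 1] exposes the task-partition trade-off.
c = partial_offload_cost(1e6, 0.5, f_local=1e6, f_edge=1e7, rate=1e6)
```

In the paper this partition decision is made jointly with VNF placement by the two cooperating agents; here only the partition half is illustrated.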
- Conference Article
- 10.1109/icufn49451.2021.9528673
- Aug 17, 2021
Vehicular edge computing (VEC) is a leading new technology for enhancing vehicular performance through task offloading, in which resource-constrained vehicles offload their computing tasks to nearby vehicular multi-access edge computing (MEC) networks. However, the vehicular task-offloading environment is extremely dynamic, and determining where an offloaded task should be processed is challenging. As a result, achieving optimal performance with a traditional VEC system is difficult because the vehicles' demands are not known in advance. Therefore, this study proposes a non-cooperative game theory-based efficient task offloading (NGTO) scheme, in which the decision to offload to either the MEC server or a remote cloud server is made through a game-theoretic approach. To reduce the processing latency of the vehicles' computation tasks and ensure the maximum utility of each vehicle, we use a distributed best-response offloading strategy. The proposed strategy adapts its offloading probability to reach a unique equilibrium under certain conditions. Detailed performance evaluation confirms that the proposed NGTO scheme outperforms the baselines in all scenarios: it reduces response time by almost 41.2% and the average task failure rate by approximately 56.3% compared with a local roadside unit computing (LRC) scheme, and by approximately 25.2% and 20.4%, respectively, compared with a collaborative (LRC with cloud via roadside unit) offloading scheme.
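Best-response dynamics in such a non-cooperative offloading game can be illustrated with a toy congestion model: each vehicle picks the MEC server or the cloud given the others' choices, and iteration stops at a pure-strategy equilibrium. The latency model (MEC delay grows with load, cloud delay is fixed) is an assumption for illustration, not the NGTO scheme itself.

```python
# Toy best-response iteration for a MEC-vs-cloud offloading game.
# MEC cost for a vehicle = number of vehicles on MEC (congestion, unit delay
# each); cloud cost = fixed round-trip delay `cloud_delay`.

def best_response_equilibrium(n_vehicles, cloud_delay=3.0):
    """Iterate best responses until no vehicle wants to switch."""
    choices = ["mec"] * n_vehicles
    changed = True
    while changed:
        changed = False
        for i in range(n_vehicles):
            others_on_mec = sum(1 for j, c in enumerate(choices)
                                if j != i and c == "mec")
            # Choose MEC iff its congested delay beats (or ties) the cloud's.
            best = "mec" if others_on_mec + 1 <= cloud_delay else "cloud"
            if choices[i] != best:
                choices[i] = best
                changed = True
    return choices

# With 5 vehicles and cloud_delay=3, the load splits: 3 on MEC, 2 on cloud.
eq = best_response_equilibrium(5)
```

At the returned profile no vehicle can lower its own delay by unilaterally switching, which is the equilibrium property the abstract refers to.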