Abstract

As distributed computing evolves, edge computing has become increasingly important. It decentralizes resources like computation, storage, and bandwidth, making them more accessible to users, particularly in dynamic Telematics environments. However, these environments exhibit high dynamic uncertainty due to frequent changes in vehicle location, network status, and edge server workload. This complexity poses substantial challenges in rapidly and accurately handling computation offloading and resource allocation while delivering low-latency services in such a variable environment. To address these challenges, this paper introduces a “Cloud–Edge–End” collaborative model for Telematics edge computing. Building upon this model, we develop a novel distributed service offloading method, LSTM Multi-Agent Deep Reinforcement Learning (L-MADRL), which integrates deep learning with deep reinforcement learning. This method includes a predictive model capable of forecasting the future demands on intelligent vehicles and edge servers. Furthermore, we formulate the computational offloading problem as a Markov decision process and employ the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) approach for autonomous, distributed offloading decision-making. Our empirical results demonstrate that the L-MADRL algorithm reduces service latency and energy consumption by 5–20% compared to existing algorithms, while also maintaining a balanced load across edge servers in diverse Telematics edge computing scenarios.
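To make the abstract's architecture concrete, the sketch below illustrates the general idea of combining an LSTM demand forecaster with a MADDPG-style actor, as the abstract describes. This is not the authors' implementation: the class names (LoadPredictor, OffloadingActor), feature dimensions, and action semantics are all illustrative assumptions.

```python
# Minimal sketch of the L-MADRL idea: an LSTM forecasts future
# vehicle/edge-server load, and the forecast is appended to each agent's
# observation before a deterministic MADDPG-style actor chooses a
# continuous offloading action. All names and shapes are assumptions.
import torch
import torch.nn as nn

class LoadPredictor(nn.Module):
    """LSTM that predicts next-step demand from a history window."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, window, n_features)
        out, _ = self.lstm(history)
        return self.head(out[:, -1])  # forecast for the next step

class OffloadingActor(nn.Module):
    """Deterministic actor: observation -> continuous offloading action."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim), nn.Tanh(),  # actions in [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# One decision step for a single vehicle agent (dimensions are assumed).
n_features, window, act_dim = 4, 10, 2
predictor = LoadPredictor(n_features)
actor = OffloadingActor(obs_dim=2 * n_features, act_dim=act_dim)

local_state = torch.rand(1, n_features)      # current vehicle/server state
history = torch.rand(1, window, n_features)  # recent load measurements
forecast = predictor(history)                # LSTM demand prediction
action = actor(torch.cat([local_state, forecast], dim=-1))
print(action)  # e.g. an offloading ratio and a target-server preference
```

During training, MADDPG pairs each actor with a centralized critic that observes all agents' states and actions, so learning is coordinated while execution remains fully distributed, matching the paper's autonomous, distributed decision-making goal.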
