Abstract

In vehicular edge computing (VEC) systems, vehicles can contribute their computing resources to the network and help other vehicles or pedestrians process their computation tasks. However, the high mobility of vehicles leads to a dynamic and uncertain vehicular environment, where network topologies, channel states, and computing workloads vary rapidly over time. It is therefore challenging to design task offloading algorithms that optimize the delay performance of tasks. In this chapter, we consider task offloading among vehicles and design learning-based task offloading algorithms grounded in multi-armed bandit (MAB) theory, which enable vehicles to learn the delay performance of their surrounding vehicles while offloading tasks. We start with the single-offloading case, where each task is offloaded to one vehicle for processing, and propose an adaptive learning-based task offloading (ALTO) algorithm that jointly considers the variation of surrounding vehicles and the input data size. To further improve the reliability of the computing services, we introduce the task replication technique, in which replicas of each task are offloaded to multiple vehicles and processed simultaneously, and propose a learning-based task replication algorithm (LTRA) based on combinatorial MAB. We prove that the proposed ALTO and LTRA algorithms have bounded learning regret compared with the genie-aided optimal solution. We also build a system-level simulation platform to evaluate the proposed algorithms in a realistic vehicular environment.
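To make the MAB framing concrete, the following is a minimal sketch of the kind of UCB-style selection rule that learning-based offloading builds on: the offloading vehicle tracks the empirical mean delay of each candidate and subtracts an exploration bonus, so under-explored vehicles are occasionally tried. The function name, the `beta` parameter, and the data layout are illustrative assumptions, not the chapter's actual ALTO or LTRA algorithm (which additionally adapts to vehicle occurrence and input data size).

```python
import math

def select_vehicle(stats, t, beta=1.0):
    """Pick a candidate vehicle by a UCB-style rule (illustrative sketch).

    stats: dict mapping vehicle id -> [total_observed_delay, count]
    t:     current round index (>= 1)
    beta:  exploration weight (assumed tuning knob, not from the chapter)

    Lower delay is better, so we minimize the empirical mean delay
    minus an exploration bonus (a lower confidence bound on delay).
    """
    best, best_score = None, float("inf")
    for v, (total, n) in stats.items():
        if n == 0:
            return v  # explore each vehicle at least once
        score = total / n - math.sqrt(beta * math.log(t) / n)
        if score < best_score:
            best, best_score = v, score
    return best

# Toy usage: two candidate vehicles with fixed per-task delays.
stats = {"a": [0.0, 0], "b": [0.0, 0]}
true_delay = {"a": 1.0, "b": 2.0}  # hypothetical delays, for illustration
for t in range(1, 201):
    v = select_vehicle(stats, t)
    stats[v][0] += true_delay[v]   # observe the task's completion delay
    stats[v][1] += 1               # update the selection count
```

After enough rounds, the faster vehicle `"a"` is selected far more often, while `"b"` still receives occasional exploratory tasks, which is the mechanism that keeps the learning regret bounded.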
