Abstract

Vehicular edge computing (VEC), a novel computing paradigm, promises to deliver diverse vehicular edge services, both functional (e.g., charging route prediction, emergency messaging) and infotainment (e.g., video gaming applications, featured movie series), at the network edge while satisfying application-specific QoS requirements. Vehicles usually send these service requests to the nearest roadside units (RSUs), which host mobile edge servers, according to their functional requirements or the vehicle owner's preferences. However, the VEC server's virtual resources may fall short of the unbounded volume of real-time service requests (infotainment/functional) during rush hours. This limitation can cause VEC servers to fail to meet stringent latency requirements, which may trigger unwanted malfunction events while driving in the requesting vehicles (if functional/critical service requests are delayed in processing). Moreover, the VEC environment's intrinsic properties, i.e., mobility, distinct application-specific latency requirements, traffic congestion, and uncertain task arrival rates, make VEC task scheduling a non-trivial problem. In this paper, we propose an extreme reinforcement learning (ERL) based context-aware VEC task scheduler that makes online adaptive scheduling decisions to meet the application-specific latency requirements of both task types (i.e., functional and infotainment). The scheduler makes scheduling decisions directly from its experience, without prior knowledge or a model of the VEC environment. Finally, we present extensive simulation results that confirm the efficacy of the proposed scheduler. The results show that, using the proposed scheduler, the VEC server achieves a successful task completion rate (i.e., tasks completed within their QoS requirements) above 96% for task arrival rates ranging from 10 to 50 arrivals/s. In the simulation, we also analyze the scheduling algorithm's scalability under vertical expansion of the VEC server. Furthermore, we compare the performance of the proposed method with two baseline methods.
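
To make the idea of learning scheduling decisions from experience more concrete, the sketch below shows a generic tabular Q-learning scheduler for a toy VEC server with a few virtual machines and deadline-aware rewards. This is not the paper's ERL method, whose details are not given in the abstract; the class name, state encoding, latency budgets, and reward values are all illustrative assumptions.

```python
# Hypothetical sketch of an RL-based VEC task scheduler (NOT the paper's ERL method).
# All constants below (number of VMs, latency budgets, rewards) are assumptions.
import random
from collections import defaultdict

NUM_VMS = 4                                   # assumed VMs on the VEC server
ACTIONS = list(range(NUM_VMS)) + [NUM_VMS]    # assign to VM 0..3, or drop/offload

def encode_state(task_type, queue_loads):
    """Coarse state: task type (0=functional, 1=infotainment) + bucketed VM loads."""
    buckets = tuple(min(load // 2, 3) for load in queue_loads)   # 4 load levels per VM
    return (task_type, buckets)

class QScheduler:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)           # Q[(state, action)] -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:                        # explore
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])     # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

def simulate(episodes=5000):
    """Toy arrival process: reward +1 if the chosen VM meets the task's latency
    budget; missed functional deadlines are penalised more than infotainment ones."""
    sched = QScheduler()
    queue_loads = [0] * NUM_VMS
    for _ in range(episodes):
        task_type = random.choice([0, 1])     # 0 = functional, 1 = infotainment
        deadline = 2 if task_type == 0 else 6 # assumed latency budgets (time units)
        state = encode_state(task_type, queue_loads)
        action = sched.choose(state)
        if action == NUM_VMS:                 # drop / offload elsewhere
            reward = -0.5
        else:
            finish_time = queue_loads[action] + 1          # unit service time
            met = finish_time <= deadline
            reward = 1.0 if met else (-2.0 if task_type == 0 else -1.0)
            queue_loads[action] += 1
        queue_loads = [max(load - 1, 0) for load in queue_loads]  # one service step
        # Simplification: next state reuses the current task type.
        sched.update(state, action, reward, encode_state(task_type, queue_loads))
    return sched

if __name__ == "__main__":
    learned = simulate()
    print(f"Learned Q-values for {len(learned.q)} state-action pairs")
```

The explicit drop/offload action and the heavier penalty for missed functional deadlines are one plausible way to encode application-specific latency requirements in the reward signal; the paper's actual state, action, and reward design may differ.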
