Abstract
Vehicular fog computing (VFC) pushes cloud computing capability to distributed fog nodes at the edge of the Internet, enabling compute-intensive and latency-sensitive services for vehicles through task offloading. However, a heterogeneous mobility environment introduces uncertainties in resource supply and demand, which are inevitable bottlenecks for optimal offloading decisions. These uncertainties also pose additional challenges to task offloading under oblivious adversary attacks and data privacy risks. In this article, we develop a new adversarial online learning algorithm with bandit feedback, based on adversarial multi-armed bandit theory, to enable scalable and low-complexity offloading decision making. Specifically, we focus on optimizing fog node selection with the aim of minimizing the offloading service cost in terms of delay and energy. The key is to implicitly tune the exploration bonus in the selection process and the assessment rules of the designed algorithm, taking into account volatile resource supply and demand. We theoretically prove that the input-size-dependent selection rule allows a suitable fog node to be chosen without exploring sub-optimal actions, and that an appropriate score-patching rule allows quick adaptation to evolving circumstances; together they reduce variance and bias simultaneously, thereby achieving a better exploration-exploitation balance. Simulation results verify the effectiveness and robustness of the proposed algorithm.
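To make the bandit-feedback selection loop described in the abstract more concrete, the Python sketch below shows a generic EXP3-IX-style rule with implicit exploration for choosing among fog nodes. This is a minimal illustration, not the paper's exact algorithm: the cost interface, the parameters eta and gamma, and the stationary toy cost model are all assumptions.

```python
import numpy as np

def exp3_ix_offload(num_nodes, horizon, cost_fn, eta=0.1, gamma=0.05, seed=0):
    """Generic EXP3-IX-style fog node selection under bandit feedback.

    cost_fn(t, node) should return the observed per-bit cost in [0, 1] of
    offloading the round-t task to the chosen node (illustrative interface).
    """
    rng = np.random.default_rng(seed)
    est_loss = np.zeros(num_nodes)          # estimated cumulative losses
    choices, observed = [], []
    for t in range(horizon):
        # Exponential-weights distribution over fog nodes
        weights = np.exp(-eta * (est_loss - est_loss.min()))
        probs = weights / weights.sum()
        node = rng.choice(num_nodes, p=probs)
        cost = cost_fn(t, node)             # bandit feedback: only the chosen node's cost
        # Implicit exploration: gamma in the denominator biases the estimate
        # downward, trading a small bias for a large variance reduction.
        est_loss[node] += cost / (probs[node] + gamma)
        choices.append(node)
        observed.append(cost)
    return choices, observed

# Toy usage: three VFNs with different (stationary) mean per-bit costs.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    means = np.array([0.6, 0.3, 0.8])
    picks, costs = exp3_ix_offload(
        3, 2000, lambda t, i: float(np.clip(rng.normal(means[i], 0.05), 0.0, 1.0)))
    print("average per-bit cost:", sum(costs) / len(costs))
```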
Highlights
Increasing demand for high-complexity but low-latency computation, driven by emerging applications such as autonomous driving, motivates the use of rising technologies, mobile edge/fog computing, which bring cloud-like computing services closer to end-users [1]–[3]
To supplement the limited edge computing resources, vehicular fog computing (VFC) [4], [5] has emerged as a new computing paradigm in which moving fog nodes with surplus resources and good connectivity, named vehicular fog nodes (VFNs), are utilized as viable components to execute computation tasks offloaded from service clients
The performance of the learning algorithms in terms of the learning regret and the average per-bit cost when ξ = 1 in equation (1), i.e., per-bit latency, is depicted in Fig. 3, showing that the proposed algorithm outperforms other implicit-exploration-based algorithms in which an arm is selected based on scores that are i) fully reset with zero values of β_i and L_{T_k,i}, or ii) partially reset with zero value of β_i (see the sketch of these reset rules below)
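For readability, one way to interpret the two baseline reset rules compared in Fig. 3 is sketched below. This is a hedged reading, not the paper's definition: the quantities β_i (exploration bonus) and L_{T_k,i} (cumulative loss score) are taken from the text, while the reset trigger and function interface are assumptions.

```python
import numpy as np

def reset_scores(cum_loss, bonus, mode):
    """Assumed semantics of the baselines in Fig. 3:
    'full'    -> zero both the exploration bonus beta_i and the cumulative loss L_i;
    'partial' -> zero only beta_i and keep the learned cumulative losses;
    'patch'   -> keep both (placeholder for the proposed score-patching rule).
    """
    if mode == "full":
        return np.zeros_like(cum_loss), np.zeros_like(bonus)
    if mode == "partial":
        return cum_loss.copy(), np.zeros_like(bonus)
    return cum_loss.copy(), bonus.copy()
```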
Summary
Increasing demand for high-complexity but low-latency computation, driven by emerging applications such as autonomous driving, motivates the use of rising technologies, mobile edge/fog computing, which bring cloud-like computing services closer to end-users [1]–[3]. A decision-making problem has been formulated in [7] as a stochastic control process, e.g., a semi-Markov decision process, to minimize the offloading service cost in terms of delay and energy. The trade-off between the delay and energy costs is investigated in [8] based on matching theory. Such centralized decision making can be challenging to run in practice due to i) the signaling overhead caused by gathering and processing a massive amount of information, e.g., the requested tasks of service users, the available resources of VFNs, and the mobility of both, and ii) the privacy concern raised by exchanging such private information with a central controller
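To make the delay-energy trade-off concrete, one plausible reading of a weighted per-bit offloading cost, consistent with the ξ = 1 (per-bit latency) setting quoted above, is sketched below. The exact form of equation (1) is not reproduced in this summary, so the linear delay/energy combination and the argument names are assumptions for illustration only.

```python
def per_bit_cost(delay_s, energy_j, task_bits, xi=1.0):
    """Hypothetical per-bit offloading cost: a weighted combination of delay and
    energy normalized by the task size; xi = 1 recovers pure per-bit latency."""
    return (xi * delay_s + (1.0 - xi) * energy_j) / task_bits

# Example: 4 Mbit task, 80 ms offloading delay, 0.5 J energy, latency-only weighting.
print(per_bit_cost(delay_s=0.08, energy_j=0.5, task_bits=4e6, xi=1.0))
```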