Abstract

With the growth of global urbanization, the Internet of Things (IoT) and smart cities have become active research topics. As an emerging paradigm, edge computing can play an important role in smart cities because of its low latency and good performance, and IoT devices can reduce task completion time by offloading computation to a mobile edge computing (MEC) server. However, if too many IoT devices simultaneously offload their computation tasks to the MEC server over the limited wireless channels, the channels may become congested, increasing the time overhead. Moreover, given the large number of IoT devices in a smart city, a centralized resource allocation algorithm requires extensive signaling exchange and is therefore inefficient. To address these problems, this paper studies the joint communication and computation policy of IoT devices in edge computing through game theory, and proposes distributed Q-learning algorithms with two learning policies. Simulation results show that the algorithms converge quickly to a balanced solution.

Highlights

  • With the increasing number of cities and growing urban populations, interest in the smart city has risen steadily

  • To address these problems and challenges, this paper focuses on how to efficiently allocate resources in the smart city, mainly aiming to let as many Internet of Things (IoT) devices associated with one mobile edge computing (MEC) server as possible complete their tasks with low latency

  • Thousands of IoT devices simultaneously generate massive numbers of computing tasks to keep multiple services running in the city

Summary

Introduction

With the increasing number of cities and growing urban populations, interest in the smart city has risen steadily. If a large number of devices simultaneously offload their computation to the MEC server, the wireless network may become congested due to the limited channel resources, increasing the time consumption. To address these problems and challenges, this paper focuses on how to efficiently allocate resources (including wireless spectrum resources and computing resources) in the smart city, mainly aiming to let as many IoT devices associated with one MEC server as possible complete their tasks with low latency. Combining game theory with the system model, we propose two distributed Q-learning algorithms with different learning policies to obtain a joint communication and computation strategy for each IoT device in the smart city; a sketch of the learning loop is given below.
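
To make the learning procedure concrete, the following is a minimal sketch (not the paper's actual implementation) of a stateless Q-learning loop for a single IoT device, with both an epsilon-greedy and a Boltzmann action-selection policy. The action set (compute locally, or offload over one of K channels), the reward definition (negative observed task latency), the hyperparameter values, and the helper measure_latency are illustrative assumptions, not details taken from the paper.

    import numpy as np

    # Minimal sketch of a stateless Q-learning loop for one IoT device.
    # Assumptions (not from the paper): action 0 = compute locally,
    # actions 1..K = offload over channel k; the reward is the negative
    # task latency returned by the hypothetical measure_latency().

    rng = np.random.default_rng(0)

    K = 4                      # number of wireless channels (assumed)
    n_actions = K + 1          # local computing plus K offloading channels
    alpha = 0.1                # learning rate
    epsilon = 0.1              # exploration rate for the epsilon-greedy policy
    tau = 0.5                  # temperature for the Boltzmann policy

    Q = np.zeros(n_actions)    # stateless: one Q-value per action

    def epsilon_greedy(Q):
        """Explore uniformly with probability epsilon, else exploit."""
        if rng.random() < epsilon:
            return int(rng.integers(n_actions))
        return int(np.argmax(Q))

    def boltzmann(Q):
        """Sample an action with probability proportional to exp(Q / tau)."""
        logits = Q / tau
        probs = np.exp(logits - logits.max())   # subtract max for stability
        probs /= probs.sum()
        return int(rng.choice(n_actions, p=probs))

    def measure_latency(action):
        """Hypothetical stand-in for the task latency the device observes."""
        return rng.uniform(0.5, 1.5) * (1.0 if action == 0 else 0.8)

    for step in range(1000):
        a = epsilon_greedy(Q)            # or boltzmann(Q)
        reward = -measure_latency(a)     # lower latency -> higher reward
        # Stateless Q-learning update: no next-state term, just a running
        # exponential average of the observed rewards for each action.
        Q[a] += alpha * (reward - Q[a])

In the distributed setting the paper describes, each IoT device would run such a loop independently, and the latency a device observes would implicitly depend on the other devices' channel choices, which is what drives the game-theoretic analysis.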

Related Work
System Model
Communication Model
Computation Model
Local Computing
Edge Computing
Problem Description
Non-Cooperative Game Model for Computation Offloading
Game Formulation
Potential Game
Distributed Q-learning Algorithm for Computation Offloading
Stateless Q-Learning
Distributed Q-Learning with ε-Greedy Learning Policy
Distributed Q-Learning with Boltzmann Learning Policy
Simulation Results
Findings
Conclusions and Future Work