Abstract

The advancement of mobile devices and the rapid development of computing applications are driving a shift from traditional, centralized mobile cloud computing toward decentralized mobile edge computing (e.g., cloudlet, fog, and multi-access edge computing), which focuses on the quality of service (QoS) of applications and the quality of experience (QoE) of user equipment (UEs). Such edge technologies enable service and resource provisioning in the vicinity of UEs, drastically reducing network propagation delay and backhaul load. However, the emergence of computation-intensive functions, latency-sensitive tasks, and bandwidth-hungry activities makes QoS-aware scheduling and resource allocation in the edge environment an open research issue. In this article, we discuss edge-based service provisioning issues from the perspective of artificial intelligence (AI). We first summarize the three main edge-enabling technologies, presenting their core functions and research challenges in the context of a layered edge environment, and then compare them to highlight their similarities and differences. The edge processing model is further elaborated for both centralized and distributed architectures. Then, we identify and categorize AI-based technologies for the UE layer, the edge platform layer, and the inter-edge layer with respect to the core edge-based service provisioning issues. Finally, we highlight open challenges and research directions, and we conclude that many of the open issues at the edge can be effectively addressed by AI-based technologies.
