In a traditional big data network, data streams generated by User Equipments (UEs) are uploaded via the Internet to the remote cloud for further processing. However, moving huge volumes of data over the Internet may incur a long End-to-End (E2E) delay between a UE and its computing resources in the remote cloud, as well as severe congestion in the Internet. To overcome this drawback, we propose a cloudlet network that brings computing and storage resources from the cloud to the mobile edge. Each base station is attached to a cloudlet, and each UE is associated with an Avatar in a cloudlet that processes its data locally. Thus, the E2E delay between a UE and the computing resources in its Avatar is reduced as compared to that in the traditional big data network. However, to maintain a low E2E delay as UEs roam, their Avatars must be handed off accordingly; yet it is impractical to migrate an Avatar's virtual disk during roaming, as this would incur unbearable migration time and network congestion. We therefore propose the LatEncy Aware Replica placemeNt (LEARN) algorithm, which places a number of replicas of each Avatar's virtual disk in suitable cloudlets; an Avatar can then be handed off among the cloudlets that host one of its replicas without migrating its virtual disk. Simulations demonstrate that LEARN reduces the average E2E delay. Furthermore, considering the capacity limitation of each cloudlet, we propose the LatEncy aware Avatar hanDoff (LEAD) algorithm to place UEs' Avatars among the cloudlets so as to minimize the average E2E delay. Simulations demonstrate that LEAD maintains a low average E2E delay.