Abstract

In recent years, both the scientific community and industry have focused on moving computational resources from the centralized cloud, with its remote data centres, to decentralised computing closer to the source, the so-called "edge" of the network. This is because the cloud alone cannot sufficiently support the huge demands of future networks, given the massive growth of new, time-critical applications such as self-driving vehicles, Augmented Reality/Virtual Reality techniques, advanced robotics and critical remote control of smart Internet-of-Things applications. While decentralised edge computing will form the backbone of future heterogeneous networks, it still remains in its infancy, and no comprehensive platform for it currently exists. In this article, we propose a novel decentralised edge architecture, a solution called OMNIBUS, which enables a continuous distribution of computational capacity to end-devices in different localities by exploiting moving vehicles as storage and computation resources. Scalability and adaptability are the main features that differentiate the proposed solution from existing edge computing models. The proposed solution has the potential to scale without bound, which would lead to a significant increase in network speed. The OMNIBUS solution rests on developing two predictive models: (i) a model that learns the timing and direction of vehicular movements to ascertain the computational capacity available in a given locale, and (ii) a theoretical framework for sequential-to-parallel conversion in learning, optimisation and caching under the contingent circumstances created by vehicles in motion.

Highlights

  • By 2025, there will be more than 75 billion connected devices around the world, as predicted by Cisco [1]

  • Discussions of computational operations have increasingly shifted from the centralised cloud, with its remote data centres, to decentralised computing closer to the source, the so-called "edge" of the network

  • The techniques we developed in this regard leverage known models such as replicator dynamics, mirror descent, stochastic gradient descent and the hedge algorithm
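To make the last highlight concrete, below is a minimal sketch of one of the named techniques, the hedge algorithm (multiplicative weights). The loss values and learning rate are illustrative assumptions, not figures from the paper; in the OMNIBUS setting the "experts" could, for example, correspond to candidate vehicles or caching strategies.

```python
import math

def hedge(losses, eta=0.5):
    """Hedge / multiplicative-weights update over a fixed set of experts.

    losses: list of rounds, each a list of per-expert losses in [0, 1].
    eta:    learning rate (illustrative default).
    Returns the final normalised weight vector.
    """
    n = len(losses[0])
    weights = [1.0] * n
    for round_losses in losses:
        # Exponentially down-weight each expert by its incurred loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, round_losses)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return weights

# Expert 0 consistently incurs lower loss, so its weight grows.
final = hedge([[0.1, 0.9], [0.2, 0.8], [0.0, 1.0]])
```

The multiplicative update is what makes the method attractive for the dynamic vehicular setting: weights adapt quickly as the loss sequence changes.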


Summary

INTRODUCTION

By 2025, there will be more than 75 billion connected devices around the world, as predicted by Cisco [1]. The back-and-forth transfer of data between the cloud and individual devices increases latency. Numerous new applications, such as self-driving vehicles, remote surgery, AR/VR, 8K video, advanced robotics in manufacturing and drone surveillance communication, require real-time, ultra-low-latency performance [9], [11], [12]. Road vehicles are among the most promising candidates for future distributed data centres at the edge of the network for two primary reasons: (1) most road vehicles display predictable movement patterns, and (2) hardware capabilities for storage and computing in cars are expected to advance tremendously in the coming years. Clusters of cars can form a powerful local hub in individual areas, capable of offering high ad-hoc computational and virtualised resources for end-devices.

