Abstract

Cyber foraging has been shown to be especially effective for augmenting low-power Internet-of-Things (IoT) devices by offloading video processing tasks to nearby edge/cloud computing servers. However, factors such as dynamic network conditions, concurrent user access, and limited resource availability can lead to offloading decisions that negatively impact overall processing throughput and end-user delays. Moreover, edge/cloud platforms currently offer both Virtual Machine (VM) and serverless computing pricing models, yet many existing edge offloading approaches investigate only single VM-based offloading performance. In this paper, we propose a predictive (NP-complete) scheduling-based offloading framework and a heuristic-based counterpart that use machine learning to dynamically decide which combination of serverless functions, or which single VM, should be deployed so that tasks can be scheduled efficiently. We collected over 10,000 network and device traces in a series of realistic experiments relating to a protest-crowd incident management application. We then evaluated the practicality of our predictive cyber foraging approach using trace-driven simulations for up to 1,000 devices. Our results indicate that predicting single VM offloading costs: (a) leads to near-optimal scheduling in 70% of the cases for service function chaining, and (b) offers a 40% gain in performance over traditional baseline estimation techniques that rely on simple statistics in the case of single-VM offloading. Considering a series of visual computing offloading scenarios, we also validate the benefits our approach gains from using online versus offline machine learning models for predicting offloading delays.
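The core decision the abstract describes — predicting offloading delays and choosing between a single VM and a serverless function chain — can be illustrated with a minimal sketch. All names here are hypothetical, and the exponentially weighted moving average stands in for the paper's actual machine learning predictor, purely for illustration:

```python
def predict_delay(observed_delays, alpha=0.5):
    """Toy online delay predictor: exponentially weighted moving average
    over recently observed offloading delays (in seconds). The real
    framework uses trained ML models; this EWMA is an assumption."""
    estimate = observed_delays[0]
    for delay in observed_delays[1:]:
        estimate = alpha * delay + (1 - alpha) * estimate
    return estimate

def choose_offload_target(vm_delays, chain_delays):
    """Deploy whichever option (single VM vs. serverless function chain)
    has the lower predicted completion delay."""
    vm_estimate = predict_delay(vm_delays)
    chain_estimate = predict_delay(chain_delays)
    if vm_estimate <= chain_estimate:
        return "vm", vm_estimate
    return "chain", chain_estimate

# Example with made-up delay traces: the VM's recent delays are stable,
# while the function chain's delays are trending upward.
target, estimate = choose_offload_target([1.2, 1.0, 1.1], [0.9, 1.5, 1.8])
print(target)  # → vm
```

In the paper's setting, the predictor would be an online or offline ML model trained on the collected network and device traces rather than a simple moving average.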
