Abstract

We present a model for measuring the impact of offloading soft real-time jobs over multi-tier cloud infrastructures. The jobs originate in mobile devices and offloading strategies may choose to execute them locally, in neighbouring devices, in cloudlets or in infrastructure cloud servers. Within this specification, we put forward several such offloading strategies characterised by their differential use of the cloud tiers with the goal of optimizing execution time and/or energy consumption. We implement an instance of the model using Jay, a software framework for adaptive computation offloading in hybrid edge clouds. The framework is modular and allows the model and the offloading strategies to be seamlessly implemented while providing the tools to make informed runtime offloading decisions based on system feedback, namely through a built-in system profiler that gathers runtime information such as workload, energy consumption and available bandwidth for every participating device or server. The results show that offloading strategies sensitive to runtime conditions can effectively and dynamically adjust their offloading decisions to produce significant gains in terms of their target optimization functions, namely, execution time, energy consumption and fulfilment of job deadlines.
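To make the kind of profiler-driven offloading decision described above concrete, here is a minimal sketch (not the actual Jay API; all names, fields, and the cost model are illustrative assumptions) of a strategy that picks the execution site, local device, cloudlet, or cloud server, minimising estimated completion time from profiled bandwidth and load:

```python
from dataclasses import dataclass

@dataclass
class Site:
    """Profiled state of a candidate execution site (hypothetical fields)."""
    name: str
    bandwidth_mbps: float   # measured link bandwidth to this site
    throughput_jps: float   # jobs per second the site currently sustains
    queued_jobs: int        # jobs already waiting at the site

def estimated_completion_s(site: Site, payload_mb: float) -> float:
    """Transfer time plus queueing/execution time at the site, in seconds."""
    transfer = (payload_mb * 8) / site.bandwidth_mbps  # MB -> Mbit over Mbps
    execute = (site.queued_jobs + 1) / site.throughput_jps
    return transfer + execute

def choose_site(sites: list[Site], payload_mb: float) -> Site:
    """Greedy strategy: pick the site with the lowest estimated completion time."""
    return min(sites, key=lambda s: estimated_completion_s(s, payload_mb))

# Example profile: local execution needs no transfer but is slow and loaded.
sites = [
    Site("local", bandwidth_mbps=float("inf"), throughput_jps=0.5, queued_jobs=2),
    Site("cloudlet", bandwidth_mbps=100.0, throughput_jps=2.0, queued_jobs=1),
    Site("cloud", bandwidth_mbps=20.0, throughput_jps=4.0, queued_jobs=0),
]
best = choose_site(sites, payload_mb=5.0)
print(best.name)  # the cloudlet wins: 0.4 s transfer + 1.0 s execution
```

An energy-oriented strategy would follow the same shape, swapping the cost function for one built from per-device energy profiles; a deadline-aware one would first filter out sites whose estimate exceeds the job's deadline.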

Highlights

  • The last decade witnessed an impressive evolution in the storage and processing capabilities of mobile devices

  • In previous work, we evaluated only latency-aware offloading strategies in several cloud configurations, from mobile edge clouds formed by Android devices up to 3-tier hybrid clouds, i.e., including cloudlets and infrastructure cloud server instances

  • In this paper, we present a model for soft real-time job offloading over hybrid cloud topologies, along with offloading strategies that aim to optimize execution time and total energy consumption while fulfilling QoS requirements in the form of job deadlines


Introduction

The last decade witnessed an impressive evolution in the storage and processing capabilities of mobile devices. Their microprocessors feature multiple GPU cores and so-called neural cores optimized for machine-learning applications such as deep learning, and have reached performance levels comparable to laptop and some desktop analogues [1]. Despite these advancements, some computational jobs are too demanding for mobile devices. Mobile cloud computing [2] has traditionally tackled this problem by offloading computation and data generated by mobile device applications to cloud infrastructures. This move spares the devices' batteries and, in principle, speeds up computation by harnessing high-availability, elastic cloud resources. From an energy point of view, however, offloading jobs and/or data to cloud infrastructures is globally highly inefficient.


