Abstract

Conventional mobile computation offloading relies on offline prefetching, which fetches user-specific data to the cloud prior to computing. For computation that depends on real-time inputs, this offline operation can fetch large volumes of redundant data over wireless channels and unnecessarily consume mobile-transmission energy. To address this issue, we propose the novel technique of online prefetching for a large-scale program with numerous tasks, which seamlessly integrates task-level computation prediction and real-time prefetching within the program runtime. The technique not only reduces mobile-energy consumption by avoiding excessive fetching but also shortens the program runtime through prediction-enabled parallel fetching and computing. By modeling the sequential task transitions in an offloaded program as a Markov chain, stochastic optimization is applied to design online-fetching policies that minimize mobile-energy consumption for transmitting fetched data over fading channels under a deadline constraint. The optimal policies for slow and fast fading are shown to share a similar threshold-based structure that selects candidates for the next task by applying a threshold to their likelihoods and furthermore uses these likelihoods to control the corresponding sizes of prefetched data. In addition, computation prediction for online prefetching is shown theoretically to always achieve energy reduction.
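
To make the threshold-based policy structure concrete, the following is a minimal sketch in Python. The function name, the particular threshold value, and the simple proportional sizing rule are illustrative assumptions; the paper derives the actual threshold and data-size allocation from stochastic optimization of mobile-energy consumption under a deadline constraint.

    import numpy as np

    def threshold_prefetch(transition_probs, data_sizes, likelihood_threshold=0.2):
        """Sketch of a threshold-based prefetching decision for one task transition.

        transition_probs: P(next task = k | current task) for each candidate task,
                          taken from the Markov-chain task-transition model.
        data_sizes: input-data size (bits) each candidate task would need if executed.
        likelihood_threshold: candidates below this likelihood are not prefetched
                              (the value here is illustrative, not the optimized one).
        Returns: number of bits to prefetch for each candidate task.
        """
        transition_probs = np.asarray(transition_probs, dtype=float)
        data_sizes = np.asarray(data_sizes, dtype=float)

        # Step 1: apply the likelihood threshold to select candidate next tasks.
        selected = transition_probs >= likelihood_threshold

        # Step 2: scale the prefetched-data size of each selected candidate by its
        # likelihood -- a simple proportional rule standing in for the
        # energy-minimizing allocation obtained via stochastic optimization.
        return np.where(selected, transition_probs * data_sizes, 0.0)

    # Example: three candidate next tasks with transition probabilities 0.6 / 0.3 / 0.1.
    print(threshold_prefetch([0.6, 0.3, 0.1], [8e6, 5e6, 2e6]))

In this toy run, the third task falls below the threshold and receives no prefetched data, while the other two are prefetched partially, in proportion to how likely they are to run next.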
