Abstract

Mobile computation offloading refers to techniques for offloading computation-intensive tasks from mobile devices to the cloud, so as to lengthen the devices' battery lives and enrich their features. Conventional designs fetch (transfer) user-specific data from mobiles to the cloud prior to computing, an approach called offline prefetching. However, this approach can result in excessive fetching of large volumes of data and place heavy loads on radio-access networks. To solve this problem, this paper proposes the novel technique of live prefetching, which seamlessly integrates task-level computation prediction and prefetching within the cloud-computing process of a large program with numerous tasks. The technique avoids excessive fetching while retaining the feature of leveraging prediction to reduce the program runtime and mobile transmission energy. By modeling the tasks in an offloaded program as a stochastic sequence, stochastic optimization is applied to design fetching policies that minimize mobile energy consumption under a deadline constraint. The policies enable real-time control of the prefetched data sizes of candidate future tasks. For slow fading, the optimal policy is derived and shown to have a threshold-based structure, selecting candidate tasks for prefetching and controlling their prefetched data based on their likelihoods. The result is extended to the design of close-to-optimal prefetching policies for fast-fading channels. Compared with fetching without prediction, live prefetching is shown theoretically to always reduce mobile energy consumption.
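
To make the threshold-based structure concrete, the sketch below illustrates one possible prefetching decision of this kind: candidate future tasks are kept only if their predicted likelihood exceeds a threshold, and the selected tasks share a transmission budget in proportion to their likelihoods. This is an illustrative sketch only; the task names, the proportional allocation rule, and the threshold value are assumptions for the example and are not the paper's derived optimal policy.

```python
# Illustrative sketch of a threshold-based prefetching rule (assumed, not
# the paper's exact policy): select candidate tasks by likelihood and size
# their prefetched data in proportion to those likelihoods.

from dataclasses import dataclass


@dataclass
class CandidateTask:
    name: str
    likelihood: float      # predicted probability that this task runs next
    input_bits: int        # total input data the task needs if it does run


def plan_prefetch(candidates, budget_bits, likelihood_threshold=0.2):
    """Return {task name: bits to prefetch} for one prefetching slot.

    Tasks below the likelihood threshold are skipped; the remaining tasks
    split the slot's transmission budget proportionally to likelihood,
    never exceeding each task's own input size.
    """
    selected = [c for c in candidates if c.likelihood >= likelihood_threshold]
    total_weight = sum(c.likelihood for c in selected)
    if total_weight == 0:
        return {}

    plan = {}
    for c in selected:
        share = int(budget_bits * c.likelihood / total_weight)
        plan[c.name] = min(share, c.input_bits)
    return plan


if __name__ == "__main__":
    # Hypothetical candidate tasks and likelihoods, for illustration only.
    candidates = [
        CandidateTask("decode_frame", likelihood=0.6, input_bits=4_000_000),
        CandidateTask("run_filter",   likelihood=0.3, input_bits=2_000_000),
        CandidateTask("render_ui",    likelihood=0.1, input_bits=1_000_000),
    ]
    print(plan_prefetch(candidates, budget_bits=3_000_000))
```

In the paper's setting, such a decision would be re-evaluated in real time as the program's task sequence unfolds, so data is prefetched alongside ongoing computation rather than all at once before it.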
