Abstract

Edge computing has emerged as a new computing paradigm dedicated to enhancing mobile performance and energy efficiency. Specifically, it benefits today’s interactive applications on power-constrained devices by offloading compute-intensive tasks to edge nodes in close proximity. Meanwhile, FPGAs are well known for accelerating (domain-specific) compute-intensive tasks, such as deep learning algorithms, in a high-performance and energy-efficient manner thanks to their hardware-customizable nature. In this paper, we make the first attempt to combine the advantages of the two and propose a new network-assisted computing model, namely FPGA-based edge computing. As a case study, we choose three computer vision (CV)-based mobile interactive applications and implement their back-end computation engines on FPGA. By deploying such application-customized accelerator modules for computation offloading at the network edge, we experimentally demonstrate that this approach effectively reduces application response time and overall system energy consumption compared with the traditional CPU-based edge/cloud offloading approach.
