Abstract

Edge intelligence has emerged as a prevalent enabling technology for various intelligent applications. Along with this prosperity, it also raises great concerns about security and privacy, since edge servers are usually shared and untrusted. Security-sensitive code (i.e., the pre-trained model) and data may be easily stolen by malicious tenants, or even by untrusted infrastructure providers. To this end, Software Guard Extensions (SGX) has been proposed to provide an isolated Trusted Execution Environment (TEE) for security and privacy guarantees. However, we find that tasks running in SGX suffer noticeable performance degradation due to the limited Enclave Page Cache (EPC) size, which leads to frequent page swapping and high enclave call overhead; both are further influenced by how tasks (i.e., DNN layers) are dispatched and scheduled. In this paper, we design Lasagna, an SGX-based secure DNN inference acceleration framework that exploits the layered structure of DNN models to balance the usage of the scarce EPC resources and the computation resources. Lasagna mainly consists of a global task balancer and a local task scheduler, responsible for task dispatching across distributed edge servers and task scheduling within a local server, respectively. We evaluate Lasagna on several well-known DNN models, and the results show that Lasagna effectively speeds up inference performance by $1.11\times$–$1.51\times$.
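The layer-level dispatching idea described above can be pictured with a small sketch. The following Python snippet is a hypothetical illustration, not Lasagna's actual balancer: it greedily places each DNN layer on the edge server whose remaining EPC budget can still hold the layer's working set, falling back to the least-loaded server otherwise. The `Layer` and `Server` classes, the EPC budget values, and `dispatch_layers` are all assumptions made for illustration.

```python
# Minimal sketch (assumed, not the paper's algorithm): greedy layer-level
# dispatching across edge servers, preferring servers whose remaining EPC
# budget can hold a layer's working set to avoid page swapping.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Layer:
    name: str
    epc_mb: float   # enclave memory the layer's weights/activations need
    flops: float    # compute demand of the layer


@dataclass
class Server:
    name: str
    epc_budget_mb: float          # usable EPC before swapping kicks in
    load_flops: float = 0.0
    layers: List[str] = field(default_factory=list)


def dispatch_layers(layers: List[Layer], servers: List[Server]) -> None:
    """Assign each layer to the least-loaded server that can still fit it in EPC."""
    for layer in layers:
        fits = [s for s in servers if s.epc_budget_mb >= layer.epc_mb]
        target = min(fits or servers, key=lambda s: s.load_flops)
        target.epc_budget_mb -= layer.epc_mb
        target.load_flops += layer.flops
        target.layers.append(layer.name)


if __name__ == "__main__":
    model = [Layer("conv1", 20, 1e9), Layer("conv2", 45, 2e9), Layer("fc", 60, 5e8)]
    edge = [Server("edge-A", 90), Server("edge-B", 90)]
    dispatch_layers(model, edge)
    for s in edge:
        print(s.name, s.layers)
```

A real balancer would also account for inter-layer data transfer and enclave call overhead; this sketch only conveys the EPC-aware placement intuition.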
