Abstract

Outsourcing Machine Learning (ML) tasks to cloud servers is a cost-effective solution when dealing with distributed data. However, outsourcing these tasks to cloud servers could lead to data breaches. Secure computing methods, such as Homomorphic Encryption (HE) and Trusted Execution Environments (TEE), have been used to protect outsourced data. Nevertheless, HE remains inefficient in processing complicated functions (e.g., non-linear functions), and TEEs (e.g., Intel SGX) are not ideal for directly processing ML tasks due to side-channel attacks and parallel-unfriendly computation. In this paper, we propose a hybrid framework integrating SGX and HE, called HT2ML, to protect users' data and models. In HT2ML, HE-friendly functions are protected with HE and performed outside the enclave, while the remaining operations are performed inside the enclave obliviously. HT2ML leverages optimised HE matrix multiplications to accelerate HE computations outside the enclave while using oblivious blocks inside the enclave to prevent access-pattern-based attacks. We evaluate HT2ML using Linear Regression (LR) training and Convolutional Neural Network (CNN) inference as two instantiations. The performance results show that HT2ML is up to ∼11× faster than an HE-only baseline on 6-dimensional data in LR training. For CNN inference, HT2ML is ∼196× faster than the most recent approach (Xiao et al., ICDCS'21).
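The abstract states that the non-HE-friendly operations run inside the enclave "obliviously", i.e., without secret-dependent branches or memory accesses that side-channel attacks could observe. As a rough illustration only (not the paper's actual implementation), the following C sketch shows a branch-free ReLU of the kind such an oblivious block might use; the helper name `oblivious_relu` and the fixed-point int32 representation are assumptions for this example.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch: a data-oblivious ReLU on 32-bit fixed-point values.
 * The result is computed with masking only, so control flow and memory
 * accesses do not depend on the secret input. */
static int32_t oblivious_relu(int32_t x)
{
    /* 1 when x is negative, 0 otherwise; computed without a branch */
    uint32_t is_neg = ((uint32_t)x) >> 31;
    /* all-ones mask when negative, all-zeros otherwise */
    int32_t mask = -(int32_t)is_neg;
    /* keep x when non-negative, zero it out when negative */
    return x & ~mask;
}

int main(void)
{
    int32_t samples[] = { -7, 0, 5, -1, 42 };
    for (size_t i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
        printf("relu(%d) = %d\n", samples[i], oblivious_relu(samples[i]));
    return 0;
}
```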
