Abstract

As a branch of artificial intelligence, machine learning (ML) relies on large volumes of data for training, but data owners are often unwilling to hand their data over because of ownership concerns and privacy risks. Instead of collecting data from its owners, transferring the ML model to the data provider's environment can be a safer way to access the data while avoiding leakage. This approach raises two concerns: the ML model may steal the provider's data, and the model itself may be stolen in the provider's environment. We design a trusted execution framework based on Intel Software Guard Extensions (SGX) that establishes a secure communication mechanism between data users and data providers and a trusted execution environment built on SGX hardware isolation, protecting both the data and the ML model. The framework also ports the Python interpreter into SGX, which simplifies programming ML algorithms inside enclaves.
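To make the architecture concrete, the sketch below (not the paper's actual code) illustrates how a data provider's untrusted host application could load an SGX enclave and ask it to run a data user's ML task on local data. ENCLAVE_FILE, Enclave_u.h, and the ECALL ecall_run_ml_task are assumed names: ECALLs are application-defined in an EDL file, so the real framework's interface will differ.

```c
/*
 * Minimal sketch, assuming the Intel SGX SDK and an application-defined EDL.
 * Only sgx_create_enclave/sgx_destroy_enclave are standard SDK calls; the
 * enclave image name and ecall_run_ml_task are hypothetical placeholders.
 */
#include <stdio.h>
#include "sgx_urts.h"
#include "Enclave_u.h"   /* untrusted proxy header generated from the assumed EDL */

#define ENCLAVE_FILE "enclave.signed.so"   /* assumed name of the signed enclave image */

int main(void)
{
    sgx_enclave_id_t eid = 0;
    sgx_launch_token_t token = {0};
    int token_updated = 0;

    /* Create the enclave: code and data inside it are isolated by SGX hardware,
       so neither the provider's OS nor other processes can inspect the ML model. */
    sgx_status_t ret = sgx_create_enclave(ENCLAVE_FILE, 1 /* debug build for illustration */,
                                          &token, &token_updated, &eid, NULL);
    if (ret != SGX_SUCCESS) {
        fprintf(stderr, "sgx_create_enclave failed: 0x%x\n", ret);
        return 1;
    }

    /* Hypothetical ECALL: inside the enclave, the ported Python interpreter
       would decrypt the data user's model/script received over the secure
       channel, run it on provider-local data, and return only the result. */
    int task_status = 0;
    ret = ecall_run_ml_task(eid, &task_status,
                            "model.enc",         /* encrypted model from the data user */
                            "provider_data.csv"  /* provider-local training data */);
    if (ret != SGX_SUCCESS || task_status != 0)
        fprintf(stderr, "ML task failed inside the enclave\n");

    sgx_destroy_enclave(eid);
    return 0;
}
```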
