Abstract

The ubiquity of artificial intelligence (AI) has led to its extensive research and application in various fields, such as computer vision, natural language processing, and medical image analysis. However, responsible AI faces severe security challenges, including the leakage of pretrained models and valuable training data. Existing solutions adopt new algorithm designs (such as federated learning) or cryptography (such as homomorphic encryption) to prevent possible security vulnerabilities. We observe that hardware-assisted trusted execution environments (TEEs) can further improve machine learning responsibility. Intel Software Guard Extensions (SGX) is popular trusted execution hardware that allows users' programs to run in an untrusted execution environment, such as a malicious operating system, while ensuring the confidentiality and integrity of their data. Therefore, we have designed a hardware-assisted secure machine learning training framework that protects data security during the training process. We have analyzed the typical characteristics of machine learning applications and characterized the framework's performance through extensive experiments. Our findings demonstrate that introducing security guarantees causes performance degradation, which points to feasible optimization directions in the near future.
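
To make the SGX mechanism described above concrete, the sketch below shows the untrusted host side of a typical enclave lifecycle using the Intel SGX SDK: the host loads a signed enclave and invokes an ECALL so that sensitive computation runs inside protected memory. The enclave image name enclave.signed.so and the ECALL ecall_train_step (assumed to be declared as a public void ECALL in the enclave's EDL file) are hypothetical illustrations, not the paper's actual framework.

```c
/* Minimal sketch of the SGX enclave lifecycle, assuming the Intel SGX SDK
 * untrusted runtime (sgx_urts). enclave.signed.so and ecall_train_step()
 * are hypothetical names used only for illustration. */
#include <stdio.h>
#include "sgx_urts.h"
#include "Enclave_u.h"  /* generated by sgx_edger8r from the enclave's EDL file */

int main(void)
{
    sgx_enclave_id_t eid = 0;
    sgx_launch_token_t token = {0};
    int token_updated = 0;

    /* Load the signed enclave image; its code and data are placed in protected
     * enclave memory that a malicious OS cannot read or tamper with. */
    sgx_status_t ret = sgx_create_enclave("enclave.signed.so", 1 /* debug */,
                                          &token, &token_updated, &eid, NULL);
    if (ret != SGX_SUCCESS) {
        fprintf(stderr, "sgx_create_enclave failed: 0x%x\n", ret);
        return 1;
    }

    /* Hypothetical ECALL: a training step executes inside the enclave, so the
     * model parameters and training data stay confidential and integrity-protected. */
    ret = ecall_train_step(eid);
    if (ret != SGX_SUCCESS)
        fprintf(stderr, "ecall_train_step failed: 0x%x\n", ret);

    sgx_destroy_enclave(eid);
    return 0;
}
```

Crossing the enclave boundary through ECALLs/OCALLs and paging data in and out of protected memory is also where the performance overhead reported in the abstract typically originates.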
