With the expansion of Artificial Intelligence (AI), and Machine Learning (ML) in particular, confidentiality regulations increasingly constrain the use of sensitive data in learning models, which are commonly hosted in cloud environments. Regulations such as HIPAA and GDPR seek to guarantee the confidentiality and privacy of personal information. The input and output data of a learning model may include sensitive data that must be protected: adversaries could intercept and exploit this data to infer further sensitive information, or even to determine the structure of the prediction model. One way to guarantee data privacy is to encrypt the data and perform inference directly over ciphertexts. This is challenging, because the learning model must then receive encrypted inputs, compute over encrypted data, and return encrypted outputs. To address this issue, this paper presents a privacy-preserving machine learning approach based on Fully Homomorphic Encryption (FHE) for a model that predicts the risk level of suffering a traffic accident. Despite the limitations of experimenting with FHE on machine learning models using a low-performance computer (limitations that high-performance computational infrastructure would overcome), we built several encrypted models. Among the encrypted models based on Decision Trees, Random Forests, XGBoost, and Fully Connected Neural Networks (FCNN), the FCNN-based model achieved the highest accuracy (80.1%) with the lowest inference time (8.476 s).
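The paper's encrypted models rely on FHE schemes that are not reproduced here. As a minimal, self-contained illustration of the core idea the abstract describes (the server computes on ciphertexts and never sees plaintext inputs or outputs), the sketch below uses a toy Paillier-style additively homomorphic scheme to evaluate a linear score over encrypted features. The tiny key parameters, the linear model, and the feature values are all illustrative assumptions, not the paper's implementation; real FHE uses far larger parameters and richer schemes (e.g. CKKS/TFHE) that also support the non-linear operations neural networks need.

```python
import math
import random

# Toy Paillier keypair (illustrative only; real deployments use ~2048-bit moduli).
p, q = 293, 433                  # small demo primes
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)     # Carmichael function of n
g = n + 1                        # standard simple choice of generator
mu = pow(lam, -1, n)             # decryption constant; valid because g = n + 1

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2, with random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def he_add(c1, c2):
    """Homomorphic addition: Enc(a) * Enc(b) = Enc(a + b)."""
    return (c1 * c2) % n2

def he_scale(c, k):
    """Plaintext scaling: Enc(a)^k = Enc(k * a)."""
    return pow(c, k, n2)

# "Encrypted inference" for a linear score w . x + b:
# the client encrypts its features; the server combines ciphertexts only.
weights, bias = [3, 5, 2], 7     # server-side plaintext model (assumed)
features = [10, 4, 6]            # client-side sensitive inputs (assumed)
enc_features = [encrypt(x) for x in features]

acc = encrypt(bias)
for w, cx in zip(weights, enc_features):
    acc = he_add(acc, he_scale(cx, w))

# Only the key holder (the client) can read the result.
print(decrypt(acc))  # 3*10 + 5*4 + 2*6 + 7 = 69
```

Note the design point the abstract hinges on: the server evaluates the model without ever decrypting, so intercepting the exchanged ciphertexts reveals neither the features nor the prediction.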