Abstract

Protecting patients’ privacy is one of the most important tasks in developing medical artificial intelligence models, since medical data is among the most sensitive categories of personal data. To address this issue, diverse privacy-preserving methods have been proposed. We propose a novel privacy-preserving Gated Recurrent Unit (GRU) inference model that uses privacy-enhancing technologies, including homomorphic encryption and secure two-party computation. The proposed privacy-preserving GRU inference model was validated on breast cancer recurrence prediction with medical data from 13,117 patients. Our method gives reliable predictions (0.893 accuracy) comparable to the plaintext GRU model (0.895 accuracy). Unlike previous works, our experiment on real breast cancer data yields nearly identical results for the privacy-preserving and conventional cases. We also implement our algorithm to demonstrate realistic end-to-end encrypted breast cancer recurrence prediction.

Highlights

  • There has been a rise in security and privacy issues in many industrial fields, especially in medical applications, since medical data is considered personal and sensitive

  • There are several types of recurrent neural network (RNN) units, such as the basic RNN unit, the well-known Long Short-Term Memory (LSTM), and the Gated Recurrent Unit (GRU) [11], which we focus on in this study

  • Our goal is to design a secure protocol between the server and the client in which the client obtains the result of evaluating the GRU model on its input, while the server learns nothing about that input; the client, in turn, learns no more about the GRU model than what can be derived from the inference result
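For background, a GRU unit combines an update gate, a reset gate, and a candidate hidden state to produce each new hidden state. The following minimal plaintext sketch shows the standard GRU cell computation; the weight and bias names are illustrative, not taken from the paper, and this is the conventional (unencrypted) evaluation that the privacy-preserving protocol must reproduce under encryption:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev,
             W_z, U_z, b_z,   # update-gate parameters
             W_r, U_r, b_r,   # reset-gate parameters
             W_h, U_h, b_h):  # candidate-state parameters
    """One step of a standard GRU cell on plaintext inputs."""
    # Update gate: how much of the new candidate state to take
    z = sigmoid(W_z @ x_t + U_z @ h_prev + b_z)
    # Reset gate: how much of the previous state feeds the candidate
    r = sigmoid(W_r @ x_t + U_r @ h_prev + b_r)
    # Candidate hidden state
    h_tilde = np.tanh(W_h @ x_t + U_h @ (r * h_prev) + b_h)
    # Interpolate between previous state and candidate
    return (1 - z) * h_prev + z * h_tilde
```

The linear parts (matrix-vector products) map naturally onto homomorphic encryption, while the sigmoid and tanh nonlinearities are the expensive steps that typically require polynomial approximation or interactive two-party subprotocols.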
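One standard building block of secure two-party computation, which the protocol goal above alludes to, is additive secret sharing: a value is split into two random-looking shares so that neither party alone learns anything, yet linear operations can be applied to each share locally. This is a generic illustrative sketch (the modulus and function names are assumptions, not the paper's concrete protocol, which also uses homomorphic encryption):

```python
import random

P = 2**61 - 1  # illustrative prime modulus

def share(x, modulus=P):
    """Split x into two additive shares that sum to x mod modulus."""
    r = random.randrange(modulus)
    return r, (x - r) % modulus

def reconstruct(s0, s1, modulus=P):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % modulus

def scale_share(s, c, modulus=P):
    """Multiply a share by a public constant locally; the scaled
    shares still reconstruct to c * x, with no communication."""
    return (s * c) % modulus
```

Linear layers of the GRU can thus be evaluated share-wise without interaction, whereas the nonlinear gates (sigmoid, tanh) require either interaction between the parties or homomorphic evaluation of a polynomial approximation.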


Introduction

There has been a rise in security and privacy issues in many industrial fields, especially in medical applications, since medical data is considered personal and sensitive. Thanks to rapid developments in deep learning, it has become possible to distil novel knowledge and build explanatory models from enormous amounts of unrefined personal data. Because such distilled knowledge is itself a form of sensitive data, its use in the real world is highly restricted. The accumulation of huge amounts of medical data, aided by deep learning, provides new insights and novel methodology, but at the same time the practical use of such academic progress is quite limited due to security and privacy issues.

