Abstract

Background
In biomedical research, data sharing and information exchange are very important for improving quality of care, accelerating discovery, and promoting the meaningful secondary use of clinical data. A major concern in biomedical data sharing is the protection of patient privacy, because inappropriate information leakage can put patients at risk.

Methods
In this study, we deployed a grid logistic regression framework based on Secure Multi-party Computation (SMAC-GLORE). Unlike our previous work in GLORE, SMAC-GLORE protects not only patient-level data, but also all the intermediary information exchanged during the model-learning phase.

Results
The experimental results demonstrate the feasibility of secure distributed logistic regression across multiple institutions without sharing patient-level data.

Conclusions
In this study, we developed a circuit-based SMAC-GLORE framework. The proposed framework provides a practical solution for secure distributed logistic regression model learning.

Highlights

  • In biomedical research, data sharing and information exchange are very important for improving quality of care, accelerating discovery, and promoting the meaningful secondary use of clinical data

  • Patient information could leak in existing distributed model-learning solutions due to disclosure of the information matrix and score vectors during iterative model learning [25, 26]. To protect these exchanged data, many secure multi-party computation (SMC) methods [18, 27–33] have been developed for distributed model learning

  • We propose a secret-sharing circuit-based secure multi-party computation framework for grid logistic regression (SMAC-GLORE); a minimal illustration of the secret-sharing idea follows this list
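
The highlighted framework builds on secret sharing over Boolean circuits. As a rough illustration only (not the paper's actual implementation, and with function names invented for this sketch), the following Python snippet shows how a private bit can be split into XOR shares among three or more parties so that no single party learns it, and why XOR gates of a circuit can be evaluated locally on the shares.

```python
import secrets

def xor_all(bits):
    """XOR a list of bits together."""
    out = 0
    for b in bits:
        out ^= b
    return out

def share_bit(bit, n_parties):
    """Split one bit into n XOR shares; any n-1 shares look uniformly random."""
    shares = [secrets.randbits(1) for _ in range(n_parties - 1)]
    shares.append(bit ^ xor_all(shares))
    return shares

def reconstruct(shares):
    """Recombine XOR shares to recover the shared bit."""
    return xor_all(shares)

def xor_gate(shares_a, shares_b):
    """Evaluate an XOR gate on shared inputs: each party XORs its own two shares locally."""
    return [a ^ b for a, b in zip(shares_a, shares_b)]

# Example with three parties: share two private bits and evaluate a XOR b without revealing a or b.
a_shares = share_bit(1, n_parties=3)
b_shares = share_bit(0, n_parties=3)
c_shares = xor_gate(a_shares, b_shares)
assert reconstruct(c_shares) == (1 ^ 0)
```

AND gates, in contrast, cannot be evaluated locally and require interaction between the parties (oblivious transfer in GMW-style protocols), which is where most of the communication cost of circuit-based secure computation arises.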

Summary

Methods

To securely evaluate the logistic function, we introduced secret-sharing circuit-based Secure Multi-party Computation (SMC) into the calculation procedure. The GMW protocol implements secure computation with secret sharing rather than garbled truth tables, which enables computation among more than two parties. We update the first derivative of the maximum likelihood function with the current β (A1: line 5); reciprocals or divisions are required in this procedure, and we use the same procedure as for matrix inversion. Equation (13) shows that each party could separately compute its own part of the first derivative based on local data, with the partial results then added together [11]; however, because each party would need the current β(t) in plaintext to do so, such an approach would leak β(t) at each iteration. In the proposed SMAC-GLORE, the only plaintext output is the learned model parameter β.
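
To make the distributed first-derivative computation concrete, the following sketch illustrates the idea with plain additive secret sharing over a fixed-point ring; the actual SMAC-GLORE framework evaluates these steps inside GMW-style Boolean circuits, so this is a simplification, and all function names (encode, share, local_gradient, etc.) are invented for the example. Each site computes its local contribution X_k^T (y_k − σ(X_k β)) from its own data, and only secret shares of these contributions are combined, so no individual site's contribution is revealed in plaintext.

```python
import numpy as np

rng = np.random.default_rng(0)  # a real protocol would use cryptographically secure randomness

MOD = 2**31    # ring Z_MOD for additive secret sharing (small enough to avoid int64 overflow)
SCALE = 10**6  # fixed-point scaling factor used to encode real numbers as ring elements

def encode(x):
    """Encode a real-valued vector as fixed-point elements of Z_MOD."""
    return np.round(x * SCALE).astype(np.int64) % MOD

def decode(x):
    """Map ring elements back to real values (values above MOD/2 represent negatives)."""
    return np.where(x > MOD // 2, x - MOD, x) / SCALE

def share(vec, n_parties):
    """Split an encoded vector into n additive shares that sum to vec modulo MOD."""
    shares = [rng.integers(0, MOD, size=vec.shape) for _ in range(n_parties - 1)]
    shares.append((vec - sum(shares)) % MOD)
    return shares

def local_gradient(X_k, y_k, beta):
    """One site's plaintext contribution to the first derivative: X_k^T (y_k - sigma(X_k beta))."""
    p = 1.0 / (1.0 + np.exp(-X_k @ beta))
    return X_k.T @ (y_k - p)

# Toy example: three sites, three covariates, current estimate beta.
beta = np.zeros(3)
sites = [(rng.normal(size=(20, 3)), rng.integers(0, 2, size=20)) for _ in range(3)]

# Each site secret-shares its encoded local gradient among all three sites.
all_shares = [share(encode(local_gradient(X, y, beta)), n_parties=3) for X, y in sites]

# Each site locally adds the shares it holds; only the aggregated first derivative is reconstructed.
held = [sum(all_shares[s][p] for s in range(3)) % MOD for p in range(3)]
aggregate = decode(sum(held) % MOD)

# The securely aggregated gradient matches the plaintext sum of local gradients (up to rounding).
expected = sum(local_gradient(X, y, beta) for X, y in sites)
assert np.allclose(aggregate, expected, atol=1e-4)
```

In a full protocol the aggregate itself would also remain secret-shared, the logistic function and the division or matrix-inversion steps would be evaluated inside the secure circuits, and only the final β would be opened, consistent with the description above.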
