Abstract

Neural network systems are highly capable of deriving knowledge from complex data, and in many applications they are used to extract patterns and trends that would otherwise remain hidden. Preserving the privacy of sensitive data and individuals' information is a major challenge in many of these applications. One of the most popular algorithms in neural network learning systems is the back-propagation (BP) algorithm, which is designed for single-layer and multi-layer models and can be applied to continuous data and differentiable activation functions. Another recently introduced learning technique is the extreme learning machine (ELM) algorithm. Although it works only on single-layer models, ELM can outperform the BP algorithm by reducing the communication required between parties in the learning phase. In this paper, we present new privacy-preserving protocols for both the BP and ELM algorithms when data is horizontally and vertically partitioned among several parties. These new protocols, which preserve the privacy of both the input data and the constructed learning model, can be applied to online incoming records and/or batch learning. Furthermore, the final model is securely shared among all parties, who can use it jointly to predict the corresponding output for their target data.
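For context, the sketch below illustrates why ELM training is cheaper than BP: the single hidden layer's weights are fixed at random and only the output weights are solved in closed form, so no iterative error back-propagation is needed. This is a minimal, centralized, non-private sketch assuming NumPy, a sigmoid activation, and synthetic data; it is not the paper's privacy-preserving protocol, which operates on data partitioned among several parties.

```python
# Illustrative, non-private ELM training sketch (single party, centralized data).
# Data, dimensions, and the sigmoid activation are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: 200 records with 5 features.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# 1. Fix the single hidden layer's weights and biases at random (never trained).
n_hidden = 50
W = rng.normal(size=(5, n_hidden))
b = rng.normal(size=n_hidden)

# 2. Compute the hidden-layer output matrix H with a sigmoid activation.
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# 3. Solve for the output weights in closed form (least squares),
#    avoiding BP's iterative weight-update rounds.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Prediction on new records reuses the same random hidden layer.
X_new = rng.normal(size=(3, 5))
H_new = 1.0 / (1.0 + np.exp(-(X_new @ W + b)))
print(H_new @ beta)
```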
