Abstract

The accuracy of a neural network can be improved by training it over multiple participants' pooled datasets, but the privacy risk of sharing sensitive data obstructs such collaborative learning. To resolve this tension, we propose TransNet, a novel scheme for privacy-preserving collaborative neural network learning, whose main idea is to add a transformation layer to the neural network. TransNet has lower computation and communication complexity than previous schemes based on secure multi-party computation or homomorphic encryption, and, unlike previous schemes based on differential privacy or stochastic gradient descent, which support only horizontally partitioned datasets, it supports arbitrarily partitioned datasets. TransNet is trained by a server that pools the transformed data, and it places no special security requirements on that server. We evaluate TransNet's performance on four datasets using different neural network algorithms. Experimental results demonstrate that TransNet is unaffected by the number of participants and trains as quickly as the original neural network. With properly chosen parameters, TransNet achieves accuracy close to that of the baseline trained on the pooled original dataset.
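To make the workflow concrete, below is a minimal sketch of the idea as described in the abstract: each participant locally transforms its share of the data before sending it to the server, which then trains an ordinary neural network on the pooled, transformed data. The choice of a private random linear projection as the transformation, the vertical split of features, and all names here are illustrative assumptions; the abstract does not specify the transformation layer's exact form.

```python
# Hypothetical sketch of the TransNet workflow (not the paper's exact method).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy data: 300 samples with 20 features, vertically partitioned
# across two participants (an example of a non-horizontal partition).
X = rng.normal(size=(300, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)
partitions = [X[:, :12], X[:, 12:]]  # each participant's feature block

# Each participant applies a local transformation (here, a random linear
# projection -- an assumed stand-in for the transformation layer), so raw
# features never leave the participant.
transformed = [
    block @ rng.normal(size=(block.shape[1], block.shape[1]))
    for block in partitions
]

# The server pools only the transformed blocks and trains a standard
# neural network on them, with no special security requirement.
X_pooled = np.hstack(transformed)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_pooled, y)
print("training accuracy on transformed data:", clf.score(X_pooled, y))
```

Because the server sees only transformed blocks, training proceeds exactly as it would on an ordinary pooled dataset, which is consistent with the abstract's claim that TransNet trains as quickly as the original network.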
