Abstract

Machine learning algorithms crucially depend on nonlinear mathematical functions such as division (for normalization), exponentiation (for softmax and sigmoid), tanh (as an activation function), logarithm (for cross-entropy loss), and square root (for back-propagation through normalization layers). However, when machine learning is performed over secure computation, protocols for these functions incur large communication overhead and high round complexity. In this work, we propose new multi-party computation (MPC) protocols for such functions. Our protocols achieve constant round complexity (3 rounds for semi-honest security, 4 for malicious), an order of magnitude lower communication (54-121× lower than prior art), and high concrete efficiency (2-1163× faster runtime). We rely on recent advances in function secret sharing (FSS) to construct these protocols. Our contributions can be summarized as follows:

(1) A constant-round protocol to securely evaluate nonlinear functions such as division, exponentiation, logarithm, and tanh with high accuracy, in contrast to prior art whose round complexity grows with the number of iterations of approximation methods or with the required precision. This construction largely follows prior work on lookup-style secure computation (an illustrative sketch of the lookup idea appears below).

(2) Our main contribution is the extension of the above protocol to security against malicious adversaries in the honest-majority setting. We provide a sketching protocol that enables malicious security for FSS schemes over rings; to prove its security, we extend and prove a corresponding form of the Schwartz-Zippel lemma over rings (the lemma's standard field form is recalled below). This is the first such extension of the lemma, and it may be of independent interest in other domains of secure computation.

(3) We implement our protocols and showcase order-of-magnitude improvements in runtime and communication. Given the low round complexity and substantially lower communication, our protocols perform even better relative to prior work in network-constrained environments such as WANs. Finally, we showcase how such functions enable scalable privacy-preserving machine learning.

Note that the techniques presented are applicable beyond machine learning, as the protocols effectively realize an efficient 1-out-of-N oblivious transfer or an efficient private information retrieval protocol.
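
The abstract observes that the lookup-style construction effectively acts as a 1-out-of-N oblivious transfer or a two-server PIR over a public table of function values. The following is a minimal, illustrative Python sketch of that idea only, not the paper's protocol: it replaces the compressed FSS/DPF key with a naively additively-shared one-hot indicator, so the communication here is linear in the table size rather than logarithmic, and no malicious-security checks are included. All names (T, SCALE, share) and parameter choices are illustrative assumptions.

```python
# Toy sketch of table-lookup secure evaluation of a nonlinear function.
# Assumptions: 2 semi-honest parties, arithmetic in the ring Z_{2^K},
# and a naively shared one-hot indicator standing in for FSS/DPF keys.
import secrets

K = 16                      # work in the ring Z_{2^K}
MOD = 1 << K
N = 1 << 8                  # table size: 8-bit inputs for this toy example
SCALE = 1 << 12             # fixed-point scale for the table entries

# Public lookup table T[x] ~ (1/x) * SCALE, a stand-in for any nonlinear
# function (division, exp, log, tanh, ...) tabulated over the input domain.
T = [0] + [SCALE // x for x in range(1, N)]

def share(v):
    """Additively share v between two parties in Z_{2^K}."""
    r = secrets.randbelow(MOD)
    return r, (v - r) % MOD

# Secret input x.  In the FSS-based protocol each party would hold a short
# DPF key that expands locally to shares of the one-hot indicator e_x; here
# the indicator is shared naively, which keeps correctness but loses the
# FSS compression (communication becomes linear in N).
x = 37
one_hot = [1 if i == x else 0 for i in range(N)]
shares0, shares1 = zip(*[share(b) for b in one_hot])

# Each party locally computes an inner product of its indicator shares with
# the public table, yielding additive shares of T[x] -- effectively a
# 1-out-of-N OT / 2-server PIR on the table of function values.
y0 = sum(s * t for s, t in zip(shares0, T)) % MOD
y1 = sum(s * t for s, t in zip(shares1, T)) % MOD

assert (y0 + y1) % MOD == T[x]
print("1/37 via shared lookup:", T[x] / SCALE)   # ~0.027
```

In the actual FSS-based protocol, the point of the DPF keys is precisely that the parties can derive such indicator shares locally from short keys, which is what yields the constant-round, low-communication lookup claimed in the abstract.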

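For orientation on contribution (2), the classical Schwartz-Zippel lemma over a field is recalled below, together with the exceptional-set analogue commonly used in ring-based MPC. This is a standard formulation added for context and is not necessarily the exact statement proved in the paper.

```latex
% Classical Schwartz-Zippel lemma over a field F:
% f \in F[x_1,\dots,x_n] nonzero of total degree d, S \subseteq F finite,
% r_1,\dots,r_n sampled uniformly and independently from S.
\Pr_{r_1,\dots,r_n \gets S}\bigl[\, f(r_1,\dots,r_n) = 0 \,\bigr] \le \frac{d}{|S|}

% A commonly used analogue over a commutative ring R (stated here as the
% usual exceptional-set form, not the paper's exact statement):
% A \subseteq R with a - a' invertible for all distinct a, a' \in A,
% f \in R[x_1,\dots,x_n] nonzero of total degree d.
\Pr_{r_1,\dots,r_n \gets A}\bigl[\, f(r_1,\dots,r_n) = 0 \,\bigr] \le \frac{d}{|A|}
```

Over Z_{2^k} the largest exceptional set has only two elements (any three residues contain a pair with even difference), which is one reason a dedicated treatment over rings, as undertaken in the paper, is needed rather than a direct application of the field lemma.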