Abstract

Machine learning inference services have recently emerged: service providers encapsulate trained machine learning models behind an interface and offer it as a service, so that anyone can submit their own data and obtain inference results. The popularity of these services has greatly lowered the barrier to using machine learning, but in current systems clients must submit their data in cleartext, sacrificing their privacy. Decision tree models are among the most widely used models in the machine learning ecosystem, so designing an efficient decision tree inference service with privacy protection has become a research focus. In this paper, we exploit the structure of decision tree evaluation and divide the overall system into four basic modules: an attribute selection module, a comparison module, a decision index vector generation module, and a decision result evaluation module. For each module we design customized and efficient secure two-party computation protocols based on secret sharing. Compared with a straightforward generic solution, performance is greatly improved. Our scheme requires no expensive public-key cryptographic primitives, which greatly reduces computation and communication overhead and enables the scheme to run on lightweight devices such as mobile phones. We evaluate the scheme by simulating a real-world network environment to demonstrate its practicality.
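As a hedged illustration only (not the paper's actual protocol), the sketch below shows the additive secret sharing primitive that secret-sharing-based two-party computation typically builds on: the client splits each attribute value into two random-looking shares so that the server never sees the cleartext feature vector. The ring size 2^32, the function names, and the sample values are assumptions made for the example.

```python
import secrets

# Additive secret sharing over the ring Z_{2^32}: a value x is split into two
# shares x0, x1 with x0 + x1 = x (mod 2^32); each share on its own is uniformly
# random and reveals nothing about x.
MOD = 2**32

def share(x: int) -> tuple[int, int]:
    """Split x into two additive shares."""
    x0 = secrets.randbelow(MOD)
    x1 = (x - x0) % MOD
    return x0, x1

def reconstruct(x0: int, x1: int) -> int:
    """Recombine the two shares to recover the secret."""
    return (x0 + x1) % MOD

# Illustrative usage: the client shares each attribute before evaluation,
# keeping one share and sending the other, so the tree holder only ever
# operates on one share of the feature vector.
features = [42, 7, 1999]                      # hypothetical client attributes
client_shares, server_shares = zip(*(share(v) for v in features))
assert [reconstruct(a, b) for a, b in zip(client_shares, server_shares)] == features
```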
