Abstract

Applying artificial intelligence models to raw multimedia data is susceptible to various inference attacks, posing a significant risk of sensitive input information leakage. Most existing studies on privacy-preserving, AI-based multimedia applications focus on a single intelligent model and thus have various limitations. In this paper, we directly perturb a core component shared by many multimedia intelligent models, including Bayesian networks and deep learning: conditional probability distribution estimation. Perturbing this component guarantees the privacy of the models. We first present a formal problem formulation of private conditional probability distribution estimation and apply it to random forests for classification tasks in multimedia applications. We then design a simple perturbation approach, NaivePrivDistEst, which adds noise to all elements of the probability estimates in a random forest. Next, we present an improved approach, FastLRG, which uses a taxonomy tree to discretize continuous attributes, combining attribute features to improve the prediction accuracy of the random forest. Finally, we perform extensive experiments to evaluate the performance of random forests built on the proposed estimation algorithms. The experimental results indicate that the proposed models outperform existing private decision trees.
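The abstract describes perturbing probability estimates to protect privacy. As a rough illustration of the general idea (the paper's actual NaivePrivDistEst and FastLRG algorithms are not specified here), the sketch below adds Laplace noise to every element of a leaf's class-count vector before normalizing it into a probability estimate; the function name and the choice of Laplace noise with scale 1/ε are assumptions for illustration, not the authors' method.

```python
import numpy as np

def perturb_class_probabilities(counts, epsilon, rng=None):
    """Illustrative sketch: noise every element of a class-count vector,
    then renormalize into a probability distribution.

    A counting query has sensitivity 1, so under standard differential
    privacy the Laplace scale is 1/epsilon (assumed here for illustration).
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.asarray(counts, dtype=float)
    noisy = counts + rng.laplace(scale=1.0 / epsilon, size=counts.shape)
    noisy = np.clip(noisy, 0.0, None)   # counts cannot be negative
    total = noisy.sum()
    if total == 0.0:                    # degenerate case: fall back to uniform
        return np.full(counts.shape, 1.0 / counts.size)
    return noisy / total

# Example: class counts at a tree leaf, privacy budget epsilon = 1.0
probs = perturb_class_probabilities([30, 10, 5], epsilon=1.0,
                                    rng=np.random.default_rng(0))
```

The output is always a valid probability vector (non-negative, summing to one), so it can stand in for the leaf's empirical class distribution at prediction time.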
