Abstract

In vehicular differentially private federated learning, the personalized differential privacy mechanism allows each vehicle to customize its local privacy budget. However, this customization leads to uneven utility across datasets, so model owners must account for the unevenness within their incentive mechanisms to enhance learning performance. Due to limitations in system resources, servers in the Internet of Vehicles should prioritize the selection of high-utility datasets before initiating the learning process. Because existing utility evaluation schemes are costly and inefficient, we propose a knowledge trading framework based on prior utility evaluation and contract theory. Specifically, we first derive a utility evaluation function that captures the functional relationship between utility and privacy elements, such as privacy cost, sensitivity, and learning rounds. These elements are shared as hyperparameters among vehicles and servers for utility evaluation. In the presence of information asymmetry, we exploit the self-revealing property of contract theory to mitigate dishonest behavior. The learning process is modeled as an optimization function that incorporates utility and privacy costs, and its complexity is further reduced by exploiting the monotonicity of the utility function. Experimental results on multiple models and datasets show that our scheme outperforms existing schemes in terms of convergence and accuracy.
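To make the personalized-budget idea concrete, the sketch below illustrates the standard Laplace mechanism with a per-vehicle privacy budget: each vehicle perturbs its local update with noise scaled to its own epsilon and the update's sensitivity, so smaller budgets yield noisier (lower-utility) contributions. The function and parameter names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def perturb_update(update, epsilon, sensitivity):
    """Personalized DP via the Laplace mechanism (illustrative sketch).

    Each vehicle picks its own privacy budget `epsilon`, so the noise
    scale b = sensitivity / epsilon differs per participant: a stricter
    (smaller) epsilon produces a noisier, lower-utility update.
    """
    scale = sensitivity / epsilon
    return update + np.random.laplace(loc=0.0, scale=scale, size=update.shape)

# Two vehicles sharing the same raw update but different budgets:
raw = np.zeros(4)
noisy_strict = perturb_update(raw, epsilon=0.1, sensitivity=1.0)  # heavy noise
noisy_loose = perturb_update(raw, epsilon=5.0, sensitivity=1.0)   # light noise
```

This per-participant noise scale is the source of the uneven dataset utility that the paper's evaluation function and contract design address.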
