Abstract

The introduction of artificial intelligence (AI) technology to the Internet of Vehicles (IoV) brings the promising potential of enabling intelligent services for moving vehicles. However, training conventional centralized AI models suffers from long access latency to the cloud server and risks leaking the private data of vehicle clients. To address these issues, federated learning (FL) has been introduced to IoV for distributed, privacy-preserving model training. Nevertheless, the selection of federated clients is critical to optimizing the system overhead and model accuracy of FL. To overcome this challenge, a proximal policy optimization (PPO)-based client selection scheme is proposed in this work for federated AI model training in IoV. Specifically, the federated client selection problem is first formulated as a Markov decision process (MDP), and a PPO-based algorithm is then developed to solve the MDP. Subsequently, the dataflow of the proposed client selection scheme is presented in detail. Simulations have been conducted based on practical scenario settings. Simulation results indicate that the proposed client selection scheme outperforms the double deep Q network (DDQN)-based scheme and the biased selection scheme in terms of convergence time and test accuracy of federated model training, with more stable system performance.
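To make the PPO-based client selection idea concrete, below is a minimal, hypothetical sketch of a clipped-surrogate PPO update for a categorical "which client to select" policy. It is not the paper's implementation: the state layout, feature count, network sizes, and the assumption of selecting one client per step are illustrative placeholders, and the advantages would normally come from a learned value baseline and a reward combining accuracy gain with round latency.

```python
import torch
import torch.nn as nn

# Hypothetical setup: the state concatenates per-client features (e.g. channel
# quality, local data size), the action picks one candidate client index, and
# the reward reflects accuracy improvement and round latency. All names and
# dimensions here are illustrative assumptions, not taken from the paper.

NUM_CLIENTS = 10                 # candidate vehicle clients per round (assumed)
STATE_DIM = 4 * NUM_CLIENTS      # e.g. 4 features per client (assumed)
CLIP_EPS = 0.2                   # standard PPO clipping parameter

class SelectionPolicy(nn.Module):
    """Categorical policy over candidate federated clients."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.Tanh(),
            nn.Linear(128, NUM_CLIENTS),
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

policy = SelectionPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

def ppo_update(states, actions, old_log_probs, advantages):
    """One clipped-surrogate PPO step on a batch of collected transitions."""
    dist = policy(states)
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)           # importance ratio
    clipped = torch.clamp(ratio, 1 - CLIP_EPS, 1 + CLIP_EPS)
    loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data, only to show the shapes involved.
if __name__ == "__main__":
    batch = 32
    states = torch.randn(batch, STATE_DIM)
    with torch.no_grad():
        dist = policy(states)
        actions = dist.sample()
        old_log_probs = dist.log_prob(actions)
    advantages = torch.randn(batch)  # would come from a value-function baseline
    print("surrogate loss:", ppo_update(states, actions, old_log_probs, advantages))
```

The clipped objective is what distinguishes PPO from a plain policy gradient: it bounds how far a single batch of client-selection experience can move the policy, which is consistent with the more stable training behavior the abstract reports relative to the DDQN-based baseline.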
