Abstract

The automotive industry has been enhancing autonomous driving systems by utilizing the computation and communication capabilities embedded in vehicles (e.g., cellular networks and sensors) and roadside units (e.g., radar and cameras). Robust security and privacy guarantees are essential in Intelligent Transportation Systems (ITS). Most deployed autonomous driving systems (e.g., Waymo and Tesla) rely on machine learning; models trained on sensitive raw data promise improvements in performance, but they cannot protect the privacy of that data or of the users who provide it. Federated learning advances privacy-preserving distributed machine learning by securely aggregating model parameter updates from individual devices. The Security Credential Management System (SCMS) for Vehicle-to-Everything (V2X) communication provides privacy-preserving authentication and penalizes misbehaving vehicles through misbehavior reporting. In this paper, we design a secure aggregation protocol for privacy-preserving federated learning in vehicular networks. Our protocol allows a server to verify vehicles securely and to aggregate the model updates each vehicle contributes to the global model. We prove the security of our protocol in the honest-but-curious setting, show that it detects attacks by active adversaries, and show that it establishes trust across domains (e.g., within and outside the SCMS domain) in a privacy-preserving manner for vehicles using SCMS. We also analyze the federated learning process on each vehicle and the server as they communicate over cellular networks (LTE and 5G) while driving on several types of roads (e.g., local, urban, and rural).
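To make the secure-aggregation idea the abstract refers to concrete, the sketch below implements a generic pairwise-masking scheme (in the style of Bonawitz et al.) for federated averaging: each vehicle masks its quantized model update with pairwise masks that cancel when the server sums all contributions, so the server learns only the aggregate. This is not the paper's SCMS-integrated protocol; the modulus, fixed-point scale, and hard-coded shared seeds are assumptions made purely for illustration, and a real deployment would derive the pairwise seeds from authenticated key agreement (e.g., tied to SCMS pseudonym certificates).

```python
# Minimal, illustrative sketch of pairwise-masked secure aggregation for
# federated averaging. NOT the authors' SCMS-based protocol; all constants
# and the use of fixed shared seeds are assumptions for illustration only.
import numpy as np

Q = 2**31 - 1          # modulus for masked arithmetic (hypothetical choice)
SCALE = 10**6          # fixed-point scale for quantizing float updates
DIM = 8                # size of each vehicle's model update vector


def quantize(update):
    """Map a float update vector to integers mod Q."""
    return np.round(update * SCALE).astype(np.int64) % Q


def dequantize(vec, n_vehicles):
    """Map the aggregated vector back to signed values and average."""
    centered = np.where(vec > Q // 2, vec - Q, vec)
    return centered.astype(np.float64) / SCALE / n_vehicles


def pairwise_mask(seed, dim):
    """Derive a deterministic mask vector from a pairwise shared seed."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, Q, size=dim, dtype=np.int64)


def mask_update(vehicle_id, update, shared_seeds):
    """Mask a vehicle's quantized update; the masks of each pair cancel
    out when the server sums every vehicle's contribution."""
    masked = quantize(update)
    for other_id, seed in shared_seeds.items():
        mask = pairwise_mask(seed, DIM)
        if vehicle_id < other_id:
            masked = (masked + mask) % Q
        else:
            masked = (masked - mask) % Q
    return masked


# Toy run with three vehicles.
rng = np.random.default_rng(0)
updates = {v: rng.normal(size=DIM) for v in range(3)}

# Fixed seeds stand in for authenticated pairwise key agreement.
seeds = {(0, 1): 11, (0, 2): 22, (1, 2): 33}
seeds_for = {
    v: {o: seeds[tuple(sorted((v, o)))] for o in range(3) if o != v}
    for v in range(3)
}

masked = [mask_update(v, updates[v], seeds_for[v]) for v in range(3)]
aggregate = np.zeros(DIM, dtype=np.int64)
for m in masked:
    aggregate = (aggregate + m) % Q  # server sees only masked vectors

avg = dequantize(aggregate, n_vehicles=3)
print(np.allclose(avg, np.mean(list(updates.values()), axis=0), atol=1e-5))
```

In this toy run the server recovers the exact average of the three updates without ever observing an individual vehicle's unmasked update, which is the property the paper's protocol provides in combination with SCMS-based vehicle verification.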
