Abstract

Federated learning-based automotive navigation has recently received considerable attention because it can mitigate weak global positioning system (GPS) signals under severe blockage, such as in downtown areas and tunnels. Specifically, the data-driven navigation framework combines position estimation from high-sampling-rate inertial measurement units with position calibration from low-sampling-rate GPS signals. Despite its promise, privacy preservation and the flexibility of participating users in the federated learning process remain problematic. To address these challenges, in this article we propose an efficient, flexible, and privacy-preserving model aggregation scheme for the federated learning-based navigation framework FedLoc. Our scheme efficiently protects the locally trained model updates, flexibly supports fluctuation in the set of participants, and is robust against unregistered malicious users by combining a homomorphic threshold cryptosystem with the bounded Laplace mechanism and a skip list. We present a detailed security analysis demonstrating the scheme's security properties in terms of privacy preservation and dishonest-user detection. In addition, we evaluate and compare computational efficiency against two traditional schemes; simulation results show that our scheme greatly improves computational efficiency during participant fluctuation. To validate the effectiveness of our scheme, we also show that when a user is dishonest, only the affected part of the model update is excluded from aggregation.
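As a rough illustration of the noise-perturbation step mentioned above, the following is a minimal sketch of a bounded Laplace mechanism applied to a model-update value. It is not the paper's implementation: the function name, parameters, and the simple clamp-to-range (truncation) variant are assumptions for illustration; published bounded Laplace constructions often instead resample or renormalize the density over the bounded support.

```python
import math
import random

def bounded_laplace(value, sensitivity, epsilon, lower, upper, rng=random):
    """Perturb `value` with Laplace noise of scale sensitivity/epsilon,
    then clamp the result to [lower, upper] so the noisy model-update
    entry stays in a valid range (simple truncation-style sketch)."""
    scale = sensitivity / epsilon
    # Inverse-CDF sampling of Laplace(0, scale): u uniform in (-0.5, 0.5)
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    # Clamp to the bounded support
    return min(max(value + noise, lower), upper)
```

For example, `bounded_laplace(0.3, sensitivity=0.1, epsilon=1.0, lower=-1.0, upper=1.0)` returns a noisy weight guaranteed to lie in [-1, 1], so the perturbed update remains usable by the aggregator while masking the exact local value.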
