Abstract

When it comes to applying new technology, the automotive industry is one of the most rapidly expanding industries in the world. A recent trend in this field is autonomous driving using machine learning (ML) techniques. Training ML models that can provide human-like driving decisions requires a large amount of heterogeneous data to be collected from multiple vehicles for training, testing and validation of the autonomous driving system. This large volume of heterogeneous data can be obtained using connected vehicles, where each vehicle shares its collected data with a central server using vehicle-to-everything (V2X) communication. The objective of this work is to analyze and compare the performance of the ‘centralized’ and ‘federated’ approaches to training ML models using V2X communication under various channel conditions. The specific application considered in this work is the prediction of the steering angle from a vision-based dataset. The results of our study indicate that: (i) even though the conventional centralized ML approach, in which the model is trained on noisy images received over the channel, may work reasonably well up to a certain bit error rate (BER), its performance degrades at higher BER values due to overfitting to the channel noise, and (ii) the federated learning (FL) approach can indeed provide a better alternative to the centralized ML approach for the considered application, while consuming less bandwidth.
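
To make the two data flows concrete, the following minimal NumPy sketch (not taken from the paper; the function names, frame dimensions, and weight shapes are illustrative assumptions) contrasts the centralized case, in which raw camera frames arrive corrupted by bit errors at a given BER, with the federated case, in which only locally trained model weights are sent and averaged at the server:

import numpy as np

def corrupt_image(image: np.ndarray, ber: float, rng=np.random.default_rng(0)) -> np.ndarray:
    """Centralized case: flip each bit of an 8-bit image with probability `ber`,
    emulating transmission of raw camera frames over a noisy V2X channel."""
    bits = np.unpackbits(image.astype(np.uint8))
    flips = rng.random(bits.shape) < ber
    return np.packbits(bits ^ flips).reshape(image.shape)

def federated_average(client_weights: list) -> np.ndarray:
    """Federated case: vehicles send locally trained weights instead of images;
    the server aggregates them with a plain FedAvg mean."""
    return np.mean(np.stack(client_weights), axis=0)

# Usage: one noisy frame at BER = 1e-3, and one aggregation round for 3 vehicles.
frame = np.zeros((66, 200), dtype=np.uint8)   # placeholder grayscale camera frame
noisy = corrupt_image(frame, ber=1e-3)
global_weights = federated_average([np.random.randn(100) for _ in range(3)])

In the federated sketch, the per-round payload is a weight vector rather than every training image, which is the source of the bandwidth saving referred to above.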
