Abstract

Artificial intelligence (AI) is among the most widely researched technologies, with a diverse range of applications across fields such as intelligent transportation systems (ITS), medicine, healthcare, and military operations. One such application is autonomous vehicles (AVs), which fall under AI in ITS. Vehicular Ad-hoc Networks (VANETs) make communication possible between AVs in the system. The performance of each vehicle depends on the information exchanged between AVs; false or malicious information can perturb the whole system, leading to severe consequences. Hence, the detection of malicious vehicles is of utmost importance. We use machine learning (ML) algorithms to detect flaws in the transmitted data. Recent work using a stacking ML approach reported an accuracy of 98.44%. In this paper, we use a decision-tree-based random forest to address the problem, achieving an accuracy of 98.43% and an F1 score of 98.5% on the VeRiMi dataset. Explainable AI (XAI) comprises methods and techniques for making complex black-box ML and deep learning (DL) models more interpretable and understandable. We use a particular model interface over the evaluation metrics to explain and measure the model's performance. Applying XAI to these complex AI models can help ensure the cautious use of AI for AVs.
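As a concrete illustration of the pipeline the abstract describes, the minimal sketch below trains a scikit-learn random forest on tabular VANET message features and reports accuracy and F1, then prints the forest's built-in feature importances as one simple interpretability signal. The CSV filename, the `is_malicious` label column, and the hyperparameters are assumptions for illustration only; they are not the paper's actual preprocessing or the exact VeRiMi schema.

```python
# Minimal sketch: random-forest misbehavior detection with accuracy/F1
# evaluation. File path, column names, and hyperparameters are
# hypothetical, not taken from the paper.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical flattened export of the VeRiMi dataset: one row per
# vehicle message, with a binary label marking malicious senders.
df = pd.read_csv("verimi_messages.csv")
X = df.drop(columns=["is_malicious"])
y = df["is_malicious"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Decision-tree-based random forest, as named in the abstract;
# the settings here are illustrative defaults.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, pred):.4f}")
print(f"F1 score: {f1_score(y_test, pred):.4f}")

# Random forests expose feature importances natively, one starting
# point for explaining which message fields drive the predictions.
top = sorted(zip(X.columns, clf.feature_importances_),
             key=lambda t: -t[1])[:5]
for name, imp in top:
    print(name, round(imp, 3))
```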
