Abstract

Artificial intelligence (AI) systems can process data independently, but they still require human oversight to make decisions and control their behaviour. As AI technology advances, it becomes harder for humans to predict the outcomes of its calculations and inferences, which increases the risks associated with its operation. AI has transformed the automotive industry, most notably through the development of self-driving cars. While these vehicles offer convenient transportation, accidents can still occur, and it matters whether the AI system acted autonomously: when an accident happens, it can be contentious who should be held accountable. This article therefore explores the direction of future legislative policy for accidents involving AI, with a focus on self-driving cars in the automotive industry. The authors propose apportioning liability in proportion to an assessment of the conduct of the user and the developers respectively, together with risk-control measures at the development stage that emphasise careful planning and management of real-world testing of unmanned vehicles as AI is integrated into daily life. The article also highlights the uncertainty over which laws should govern self-driving vehicles and stresses the importance not only of complying with laws and regulations but also of ensuring their ethical development and use.
