Abstract

The characteristics of artificial intelligence (AI) technologies make it difficult to determine who should bear responsibility when legally protected human interests are violated (when damage is caused). At the heart of the article are the following questions: Should a person be responsible for damage caused by AI technologies, given the "independent decision-making process" of the AI? Should liability be borne by producers, owners or users of AI technologies? Can existing tort liability regimes apply to damage caused by AI? Which liability regime should be applied – fault-based liability or strict liability? The idea of introducing a new category of persons in law – the ePerson – has been set aside for now. Although AI shows certain levels of decision-making autonomy, it is the (wo)man who has programmed and produced the AI. For that reason, the (wo)man should bear legal responsibility for damage caused by AI. That being said, the responsible persons are the producers, owners and users of the AI, depending on the circumstances of the case. The last decade has been marked by the European Union's efforts to find an optimal solution for AI liability. Proposals have been made for revising product liability rules to cover this type of damage, as well as for harmonizing and adapting the rules of fault-based liability. If a defect can be found, product liability should be applied to the producer. If that is not the case, either strict liability under national law or fault-based liability should apply. In the latter case, the European Union has proposed a set of procedural rules concerning the presumption of the causal link and of the fault of the provider or user of the AI, benefiting the injured person (the claimant).
