Abstract

This article examines the use of AI as a tool of crime from the perspective of the norms and principles of criminal law. It discusses how the existing legal framework for determining culpability could be applied to offenses committed with the use of AI. The article analyzes the current state of criminal law for both intentional and negligent offenses and offers a comparative analysis of these two forms of culpability.
 
 Part of the work is devoted to culpability in intentional crimes. The analysis demonstrates that law enforcers and legislators should reconsider the approach to determining culpability when artificial intelligence systems are used to commit intentional crimes. Because an artificial intelligence system, in some sense, has its own designed cognition and will, courts cannot rely on the traditional concept of culpability in intentional crimes, in which intent is determined directly from the actions of the offender.
 
 Criminal negligence is reviewed in the article from the perspective of a developer's criminal liability. The developer is considered a person who can influence and anticipate the harm caused by the AI system he or she created. If product developers were free from any form of criminal liability for harm caused by their products, highly negative social consequences would follow. Yet a situation in which a person developing an AI system must take into account every potential harm caused by the product also has negative social consequences. The authors conclude that a balance between these two extremes should be found, and that the current legal framework does not serve the goal of determining culpability for crimes in which AI is the tool.

Highlights

  • Artificial intelligence was created to make daily life easier

  • This paper studies how the application of artificial intelligence influences conceptions of culpability in criminal law

  • The aim of the paper is to highlight key legal considerations related to AI and the concept of culpability in criminal law

Summary

Introduction

Artificial intelligence was created to make daily life easier. For some people with a particular attitude toward society, crime is their daily life. Almost all child pornography, which is illegal in nearly all states, is distributed via the Internet. Another example of a technology designed with good intentions but actively used by criminals is the Darknet. One of the suggested proposals is to implement ethical (or legal) rules into the behavioral principles of AI as a universal remedy against the harm caused by such systems. This sounds logical, but as Patrick Lin fairly commented on autonomous machines, "One natural way to think about minimizing risk of harm from robots is to program them to obey our laws or follow a code of ethics." Because law and ethics are complex systems of norms that depend heavily on the context of application, it is very difficult to design a machine that cannot be used as a tool of crime. Is it significant that a tool has its own will and freedom to make decisions? Could an autonomous system that is a black box even for its developer be a factor affecting the guilt of the accused?

Culpability in Criminal Law
Intentional Crimes
Criminal Negligence
Findings
Conclusion
