Abstract

The rapid progression and widespread integration of Information and Communication Technology (ICT) have ushered in a new era of sweeping social and legal transformations. Among the many groundbreaking advancements, Artificial Intelligence has emerged as a pivotal force, permeating nearly every facet of our daily lives. From commerce and industry to healthcare, transportation, and entertainment, Artificial Intelligence technologies have become indispensable tools shaping the way we interact, work, and navigate the world around us. With its remarkable capabilities and ever-expanding reach, Artificial Intelligence stands as a testament to humanity's relentless pursuit of innovation and the boundless potential of technology to revolutionize society. While performing the tasks they are programmed for, Artificial Intelligence systems can carry out actions that would constitute crimes if committed by humans. Crimes, however, are subject to the reserve of law, so such conduct can be difficult to criminalize in the absence of written law. Moreover, in modern legal systems, the structure of a crime requires not only the commission of a typical fact but also the determination to commit it. In this scenario, since Artificial Intelligence is a non-human entity, the reconstruction of its criminal responsibility is particularly difficult to theorize. This is mainly due to the peculiar nature of the environment the machine inhabits: the digital environment constitutes a reality of its own, and many of its actors (for example, algorithms, protocols, and programs) are not human and can exist only within that reality. In this environment, machines can act, determine themselves, and possibly commit crimes with or without a human user. This scenario makes it necessary to analyze Artificial Intelligence crimes in light of ordinary ones, under ordinary criminal law.
This analysis allows practitioners (lawyers, judges, and scholars) to draw on three traditional liability models: "perpetration-via-another", "natural probable consequence", and "direct liability". Through these models, they can assess whether the machine committed a crime. Nevertheless, the three liability models mentioned above open the door to an entirely modern scenario: human-machine concurrence (criminal participation between a human and an Artificial Intelligence algorithm). In fact, if theorizing the liability of the machine is challenging, it is even more complicated to reconcile with modern Constitutions the concurrence between the living and the digital. Indeed, it is necessary to assess whether a machine can commit crimes (or is merely an instrument), determine how the machine can concur with a human, and establish how much responsibility can be attributed to it. This paper analyzes the peculiarities of Artificial Intelligence, deconstructs three possible Artificial Intelligence liability models, and, finally, theorizes human-machine criminal participation through the lens of Italian law.
