Abstract

Background: Digital technologies are currently a major driver of society's development, affecting not only traditional spheres such as medicine, manufacturing, and education, but also legal relations, including criminal proceedings. This concerns more than technologies such as videoconferencing, automated case distribution, and digital evidence: development moves forward constantly and rapidly, and we now face issues related to the use of artificial intelligence technologies in criminal proceedings. Such changes also bring new threats and challenges, namely the challenge of respecting fundamental human rights and freedoms in the context of technological development. In addition, there is the matter of ensuring the implementation of basic legal principles, such as the presumption of innocence, non-discrimination, and the protection of the right to privacy, when artificial intelligence systems are applied in the criminal justice system.

Methods: The general philosophical framework of this research consists of axiological and hermeneutic approaches, which allowed us to conduct a value-based analysis of fundamental human rights and of changes in their perception in the context of AI application, as well as to undertake an in-depth study and interpretation of legal texts. In building the system of basic principles for using AI systems in criminal justice, we employed the system-structural and logical methods of research. The study also relied on the comparative law method to compare legal regulation and law enforcement practice across different legal systems, and the method of legal modelling was applied to identify the main areas of possible application of AI systems in criminal proceedings.

Results and Conclusions: The article identifies the main possible vectors for the use of artificial intelligence systems in criminal proceedings and assesses the feasibility and prospects of their implementation.
It further argues that only the use of AI systems for auxiliary purposes carries minimal risks of interference with human rights and freedoms. Other areas of AI adoption may significantly infringe rights and freedoms; the use of AI for such purposes should therefore be fully controlled, verified, and strictly subsidiary, and in certain cases prohibited altogether.
