“Liability for damage caused by healthcare products and services equipped with artificial intelligence systems”
The incorporation of artificial intelligence systems into healthcare products and services is revolutionising the world of medicine. The effectiveness of these advances depends, to a large extent, on the existence of a regulation that guarantees fundamental rights and is capable of delimiting the non-contractual liability that must be demanded for damage caused by health products and services that incorporate artificial intelligence systems, and of identifying the liable parties. This paper studies the existing possibilities for the enforcement of this liability and the recently approved European regulation to this effect.
- Research Article
3
- 10.17072/1995-4190-2022-58-683-708
- Jan 1, 2022
- Вестник Пермского университета. Юридические науки
Introduction: when studying legal issues related to safety and adequacy in the application of artificial intelligence systems (AIS), it is impossible not to raise the subject of liability accompanying the use of AIS. In this paper we focus on the civil law aspects of liability for harm caused by artificial intelligence and robotic systems. Technological progress necessitates the revision of many legislative mechanisms in such a way as to maintain and encourage further development of innovative industries while ensuring safety in the application of artificial intelligence. It is essential not only to respond to the challenges of the moment but also to look forward and develop new rules based on short-term forecasts. Contrary to earlier belief, there is no longer any reason to claim categorically that the rules governing the institution of legal liability will not require fundamental changes. This is due to the growing autonomy of AIS and the expansion of the range of their possible applications. Artificial intelligence is routinely employed in creative industries, decision-making in different fields of human activity, unmanned transportation, etc. However, major issues remain unresolved concerning the parties liable in the case of harm inflicted by AIS, the viability of applying no-fault liability mechanisms, and the appropriate level of regulation of such relations; discussion of these issues is far from over. Purpose: based on an analysis of theoretical concepts and legislation in both Russia and other countries, to develop a vision of civil law regulation and tort liability in cases where artificial intelligence is used. Methods: empirical methods of comparison, description, and interpretation; theoretical methods of formal and dialectical logic; special scientific methods: the legal-dogmatic method and the method of interpretation of legal norms. Results: there is considerable debate over the responsibilities of AIS owners and users.
Many countries have adopted codes of ethics for artificial intelligence. However, what is required is legal regulation, for instance, treating an AIS as a source of increased danger; in the absence of relevant legal standards, it is reasonable to use a tort liability mechanism based on analogy of the law. Standardization in this area (standardization of databases, software, infrastructure, etc.) is also important for identifying the AIS developers and operators to be held accountable; violation of standardization requirements may also be a ground for holding them liable under civil law. New dimensions are being added to classic legal notions such as the subject of harm, the object of harm, and the party that has inflicted the harm, used with regard to both contractual and non-contractual liability. Conclusions: the research has shown that the legislation of different countries currently provides only soft regulation of liability for harm caused by AIS. However, it is time to move gradually from the development of strategies to practical steps toward the creation of effective mechanisms aimed at minimizing the risk of harm for which no one is held liable. Since the process of developing AIS involves many participants with independent legal status (data supplier, developer, manufacturer, programmer, designer, user), it is rather difficult to establish the liable party when something goes wrong, and many factors must be taken into account. Regarding the infliction of harm on third parties, it seems logical and reasonable to treat an AIS as a source of increased danger; in the absence of relevant legal regulations, it would be reasonable to use a tort liability mechanism by analogy of the law. The model of contractual liability requires the development of common approaches to defining the product and the consequences of violating the terms of the contract.
- Research Article
28
- 10.1007/s00146-022-01572-2
- Oct 2, 2022
- Ai & Society
Public entities around the world are increasingly deploying artificial intelligence (AI) and algorithmic decision-making systems to provide public services or to use their enforcement powers. The rationale for the public sector to use these systems is similar to that of the private sector: to increase the efficiency and speed of transactions and to lower costs. However, public entities are first and foremost established to meet the needs of the members of society and to protect the safety, fundamental rights, and wellbeing of those they serve. Currently, AI systems are deployed by the public sector at various administrative levels without robust due diligence, monitoring, or transparency. This paper critically maps out the challenges in the procurement of AI systems by public entities and the long-term implications that necessitate AI-specific procurement guidelines and processes. This dual-pronged exploration covers the new complexities and risks introduced by AI systems and the institutional capabilities impacting the decision-making process. AI-specific public procurement guidelines are urgently needed to protect fundamental rights and due process.
- Research Article
- 10.17223/15617793/502/19
- Jan 1, 2024
- Vestnik Tomskogo gosudarstvennogo universiteta
The article analyzes the influence of artificial intelligence on various spheres of social life: educational, cultural, economic, and others, from the standpoint of understanding the role of a person and the setting of their thinking under the influence of artificial intelligence systems. The influence of artificial intelligence on the spheres of human life raises the inevitable question of the transformation of the legal norms governing these areas. No matter how technologically advanced artificial intelligence systems are, there is always a risk of an incorrect decision, which can result in damage to property or harm to life and health. An important conceptual problem area of regulation is the development of a model of responsibility for harm caused by artificial intelligence and robotics systems. The article considers approaches to identifying the culprit. Several situations stand out when deciding the issue of responsibility for harm caused by the use of artificial intelligence systems. Firstly, there may be deliberate distortion of program code at the stage of creating or training an artificial intelligence system. In such a situation, the responsibility naturally falls on the person involved in the creation or training. Given that, as a rule, several persons are involved in such a process, difficulties may arise in identifying the specific guilty party, similar to the situation in which a violent crime is committed by a group of persons and the fatal blow is inflicted by one member of the group. Secondly, harm may be caused through the illegal seizure of control of the artificial intelligence system. In such a situation, responsibility again lies with the attacker, a natural person.
Thirdly, there may be situations when, in the process of self-learning, artificial intelligence systems come to conclusions that do not depend on the actions of the developers and, as a result, harm is caused. Disputes about the responsibility of such artificial intelligence arise precisely when its decisions are autonomous, that is, made independently of human actions. It is concluded that, at this stage, our society is not ready to recognize artificial intelligence as an independent subject capable of bearing responsibility, especially in the criminal law sphere. To resolve the question of whether artificial intelligence can be recognized as an independent subject of responsibility, it is necessary to proceed from the general provisions of the theory of legal responsibility, correlating its constituent elements with the characteristics of artificial intelligence systems, which requires independent consideration and research. The authors declare no conflicts of interest.
- Research Article
- 10.24144/2307-3322.2024.86.2.36
- Jan 6, 2025
- Uzhhorod National University Herald. Series: Law
The article examines the legal aspects of ensuring the patient’s right to informed voluntary consent in the provision of psychiatric care using artificial intelligence (AI) systems. Overall, the use of AI opens new possibilities for the diagnosis and treatment of mental disorders, offering significant potential to enhance the effectiveness of psychiatric care. However, the application of these technologies introduces various risks for patients, particularly concerning the protection of autonomy, the transparency of AI algorithms, and the security of personal data. Patients with mental disorders represent a particularly vulnerable group requiring additional legal guarantees in decision-making regarding treatment, especially when innovative technologies are involved. Based on an analysis of existing technologies, the authors identify a number of risks associated with the use of AI systems in psychiatric care, including: 1) violations of personal data confidentiality; 2) risks associated with decisions made by AI systems; 3) potential discrimination based on gender, race, religion, or other characteristics; 4) misuse of AI in medical practice; 5) risks arising from malfunctions in AI systems; 6) other potential hazards. To mitigate these risks, the article considers legal regulatory measures, including the introduction of European legislation such as the AI Act, the implementation of certification, and the establishment of effective mechanisms for informed voluntary consent to the use of AI in psychiatry, given the high risks posed by this technology. The authors note that Ukrainian legislation currently lacks adequate mechanisms for obtaining informed consent in the use of AI for psychiatric care. The article proposes improvements to Ukrainian regulatory acts through the development of a separate consent form for the use of AI systems in psychiatric assessment or treatment, which would help to avoid the legal risks inherent in AI systems.
Such a consent form would include detailed information for the patient about the specific AI systems to be used, their nature, purpose, and estimated duration of use. It would also inform the patient that the data collected and processed by the AI system would be protected according to data protection legislation, and it would include a verbal explanation of risks by the physician, as well as the options for choosing alternative treatment methods based on the doctor’s recommendations. The conclusions emphasize the importance of advancing national legislation to align with the AI Development Concept and international certification standards. This will ensure the protection of patients’ rights and foster the effective integration of AI in the field of psychiatric care.
- Book Chapter
- 10.3233/atde250917
- Oct 1, 2025
The possibility of neuro-linguistic textual identification of intelligent systems (IS) and artificial intelligence (AI) systems is investigated. To set up the task, intelligent systems were assigned tasks in two languages, and artificial intelligence systems of different generations were assigned tasks in four languages. As part of the study, a specialized software package was used to evaluate information parameters, along with an information analyzer designed for the neuro-linguistic identification of texts. The results make it possible to use information characteristics as parameters for the neuro-linguistic identification of artificial intelligence systems and intelligent systems. The study showed that, in the transition from one system to another, the parameters of neuro-linguistic text identification change for both intelligent systems and artificial intelligence systems. For AI systems, the parameters change when switching from one language to another within one neural network, and when changing neural networks while keeping the same language. For intelligent systems, the parameters likewise change in the transition from one language to another, and when changing the intelligent system while keeping the same language.
- Research Article
- 10.26565/2226-0994-2024-71-7
- Dec 23, 2024
- The Journal of V. N. Karazin Kharkiv National University, Series "Philosophy. Philosophical Peripeteias"
The question of the expediency and the principal possibility of machine imitation of human intellect is discussed from the point of view of evaluating the prospects of various directions of development of artificial intelligence systems. It is shown that, even beyond this practical aspect, the solution to the question of the principal possibility of creating a machine equivalent of the human mind is of great importance for understanding the nature of human thinking, consciousness, and the mental in general. It is noted that the accumulated experience of creating various artificial intelligence systems, together with the currently available results of studies of human intelligence and consciousness in philosophy and psychology, allows a preliminary assessment of the prospects of creating an algorithmic artificial system equal in its capabilities to human intelligence. The drawbacks revealed in the use of artificial intelligence systems by mass users and in scientific research are analyzed. The key disadvantages of artificial intelligence systems are the inability to independently set goals, the inability to form a consolidated “opinion” when working with divergent data, and the inability to objectively evaluate the results obtained and to generate revolutionarily new ideas and approaches. Second-level disadvantages include the insufficiency of the information accumulated by humankind for further training of artificial intelligence systems and the resulting training of models on content partially synthesized by artificial intelligence systems themselves, which leads to the “forgetting” of part of the information obtained during training and to an increase in cases of unreliable output.
This, in turn, makes it necessary to verify the reliability of every answer given by an artificial intelligence system whenever critical information is processed; given the plausibility of the data produced by artificial intelligence systems and the comfortable form of their presentation, this requires the user to have well-developed critical thinking. It is concluded that the main advantage of artificial intelligence systems is that they can significantly increase the efficiency of information retrieval and primary processing, especially when dealing with large data sets. The importance of the ethical component in artificial intelligence, and of the creation of a regulatory framework that introduces responsibility for the harm that may be caused by the use of artificial intelligence systems, is substantiated, especially for multimodal artificial intelligence systems. It is concluded that the risks associated with the use of multimodal artificial intelligence systems consistently increase as functions of human consciousness such as will, emotions, and adherence to moral principles are realized in them.
- Research Article
- 10.21202/jdtl.2025.7
- Mar 27, 2025
- Journal of Digital Technologies and Law
Objective: to identify key ethical, legal and social challenges related to the use of artificial intelligence in healthcare, and to develop recommendations for creating adaptive legal mechanisms that can ensure a balance between innovation, ethical regulation and the protection of fundamental human rights. Methods: a multidimensional methodological approach was implemented, integrating classical legal analysis methods with modern tools of comparative jurisprudence. The study covers both the fundamental legal regulation of digital technologies in the medical field and an in-depth analysis of the ethical, legal and social implications of using artificial intelligence in healthcare. Such an integrated approach provides a comprehensive understanding of the issues and well-grounded conclusions about the development prospects in this area. Results: the study revealed a number of serious problems related to the use of artificial intelligence in healthcare. These include data bias, nontransparent complex algorithms, and privacy violation risks. These problems can undermine public confidence in artificial intelligence technologies and exacerbate inequalities in access to health services. The authors conclude that the integration of artificial intelligence into healthcare should take into account fundamental rights, such as data protection and non-discrimination, and comply with ethical standards. Scientific novelty: the work proposes effective mechanisms to reduce risks and maximize the potential of artificial intelligence under crises. Special attention is paid to regulatory measures, such as the impact assessment provided for by the Artificial Intelligence Act.
These measures play a key role in identifying and minimizing the risks associated with high-risk artificial intelligence systems, ensuring compliance with ethical standards and the protection of fundamental rights. Practical significance: adaptive legal mechanisms were developed that support democratic norms and respond promptly to emerging challenges in public healthcare. The proposed mechanisms make it possible to achieve a balance between the use of artificial intelligence for crisis management and the protection of human rights. This helps to build confidence in artificial intelligence systems and to sustain their positive impact on public healthcare.
- Research Article
1
- 10.36887/2524-0455-2024-2-1
- Mar 26, 2024
- Actual problems of innovative economy and law
The article defines the conditions for the admissibility of the practical use of artificial intelligence conclusions (decisions) in law enforcement activities. A warning is expressed that, depending on the circumstances of its specific application and use and the level of technological development, artificial intelligence may create risks and harm state or private interests and the fundamental rights of individuals. The admissibility of using artificial intelligence systems and artificial intelligence conclusions (decisions) in law enforcement activities is established as grounds for conducting an additional check, but not as a basis for a decisive decision by a law enforcement body. Attention is focused on the fact that artificial intelligence systems should help law enforcement officers make decisions, not make decisions in their place. Modern scientific views on the use of artificial intelligence systems in law enforcement activities are analyzed, along with the guiding provisions of the draft legislative resolution of the European Parliament on the proposal for a regulation of the European Parliament and the Council establishing harmonized rules on artificial intelligence (the Artificial Intelligence Act) and separate legal acts of Ukraine in the field of the development and use of artificial intelligence technologies. It is concluded that the permissible (legal, ethical) limits of the use of artificial intelligence conclusions (decisions) in law enforcement activities require proper scientific substantiation, and that there is a lack of specialists who can create and properly control artificial intelligence technologies. The expediency of developing a Code of Ethics for artificial intelligence with the participation of a wide range of interested parties, including law enforcement officers, is supported.
It is noted that there is a need to bring current legislation on the use of artificial intelligence technologies into compliance with international legal acts and established standards, in particular regarding the admissibility (acceptability) of using artificial intelligence conclusions (decisions) in law enforcement activities. There is also a need to raise the level of professional training of specialists in order to provide the field of artificial intelligence technologies with qualified staff capable of monitoring the application of artificial intelligence technologies in law enforcement activities. Keywords: artificial intelligence, artificial intelligence technologies, law enforcement activities, law enforcement agencies.
- Research Article
- 10.54664/brem6290
- Dec 21, 2022
- De Jure
The study explores the issue of the legal personality and liability of artificial intelligence (AI) systems. A real AI should have a will and self-awareness, but, at this point, there are mainly systems with a collective “cloud” intelligence that is located outside of them and supported by people (Sophia, the chatbot Miraya, the chatbot Tay, the xenobots). It is important to be clear about whether robots are still only a “means”, a “tool” that facilitates human life, or whether they already have qualities that make them independent entities. Currently, AI systems are treated as objects of law. Granting them legal personality similar to that of legal entities is not a solution either, because of their specific nature. If, in the future, intelligent systems become independent and emancipated from the human beings that created them, they could be considered a new specific subject, a legal person sui generis. The regulatory framework of international organizations in this area already places robots in the category of “electronic person” (EU) and binds their legal status to the protection of basic human rights. At this point, a number of practical issues are yet to be resolved: identifiability, the establishment of a register, and keeping the data in it up to date. The possible granting of legal personality to AI systems, even a specific or limited one, raises the question of the rights of the robots themselves (procedural legal capacity, property rights, labour rights, tax legal personality), as well as of the responsibility for damages and their compensation. One of the most important issues in the development of intelligent machines is the extent to which we should allow them to make autonomous or automated decisions. Algorithms that are initially set and related to the protection of fundamental human rights should be stable, or “locked” against changes by artificial intelligence systems in the course of their improvement and self-learning.
The issue of human control is important, especially in cases where decisions might affect human life, health, and social support. The rapid development of digital technologies should make us think about a future in which AI systems can deviate so much from the basic algorithms set by humans that joint and several financial liability may arise. The theory also discusses the applicability of criminal liability to robots.
- Research Article
25
- 10.3390/jrfm14120604
- Dec 13, 2021
- Journal of Risk and Financial Management
Purpose: Technology initiatives are now incorporated into a wide range of business domains. The objective of this paper is to explore the possible effects that Artificial intelligence systems have on entrepreneurs’ decision-making, through the mediation of customer preference and industry benchmark. Design/methodology/approach: This is a non-empirical review of the literature and the development of a conceptual model. Searches were conducted in key academic databases, such as Emerald Online Journals, Taylor and Francis Online Journals, JSTOR Online Journals, Elsevier Online Journals, IEEE Xplore, and Directory of Open Access Journals (DOAJ) for papers which focused on Artificial intelligence (AI), Entrepreneurial decision-making, Customer preference, Industry benchmarks, and Employee involvement. In total, 25 articles met the predefined criteria and were used. Findings: The study proposes that Artificial intelligence systems can facilitate better decision-making from the entrepreneurial perspective. In addition, the study demonstrates that employees, as stakeholders, can moderate the relationship between Artificial intelligence systems and better decision-making for entrepreneurs with their involvement. Moreover, the study demonstrates that customer preference and industry benchmark can mediate the relationship between Artificial intelligence systems and better entrepreneur decision-making. Research limitations/implications: The study assumes a perfect ICT environment for the smooth operation of Artificial intelligence systems. However, this might not always be the case. The study does not consider the personal disposition of entrepreneurs in terms of ICT usage and adoption. Practical implications: This study proposes that entrepreneurial decision-making is enriched in an environment of Artificial intelligence systems, which is complemented by customer preference, industry benchmark, and employee involvement. 
This finding provides entrepreneurs with a possible technological tool for better decision-making, highlighting the endless options offered by Artificial intelligence systems. Social implications: The introduction of AI in the business decision-making process comes with many social issues in relation to the impact machines have on humans and society. This paper suggests how this new technology should be used without destroying society. Originality/value: This conceptual framework serves as a valuable organizational spectrum for entrepreneurial development. In addition, this study makes a valuable contribution to entrepreneurial development through Artificial intelligence systems.
- Research Article
- 10.36433/kacla.2025.8.2.51
- Aug 31, 2025
- Korea Anti-Corruption Law Association
Audit agencies in countries such as the United States, the United Kingdom, the Netherlands, and Brazil conduct audits using artificial intelligence systems to ensure the effective recovery of fraudulent payments, to address the illegal and unfair performance of duties, and to ensure the appropriateness of audits. In Korea, the Framework Act on Artificial Intelligence was enacted in 2025, and Article 20 of the General Act on Public Administration provides for automated dispositions by artificial intelligence in administrative dispositions involving binding acts. The Audit Office Act and the Public Audit Act require the establishment and operation of an audit-related information system, but because they do not define the concept of this information system, it cannot be concluded that it is an artificial intelligence information system. Since there are no express regulations on the scope and targets of audit activities by audit agencies, or on standards, confidentiality, and the registration and certification of audit artificial intelligence systems, the predictability, transparency, and legality of artificial intelligence audits are not guaranteed. In addition, the concept of an artificial intelligence audit should include not only the audit of artificial intelligence systems but also all audit activities using artificial intelligence systems by auditors or audit agencies. In particular, it is necessary to establish a third-party auditor system, in addition to internal and external auditors, for audits by artificial intelligence systems. A U.S. California bill on artificial intelligence audits was proposed to the California Legislature in February 2025.
The bill would establish an artificial intelligence system on the websites of state audit and operation agencies to create a database of audit information, and would lay down procedures for the registration, certification, and verification of artificial intelligence auditors and artificial intelligence systems, as well as rules on the disclosure and confidentiality of audit information, the subjects of information to be provided to artificial intelligence systems, the explainability of artificial intelligence audits, and the obligation to store audit information for 10 years; however, it does not mandate that administrative or public institutions introduce artificial intelligence systems. In this paper, after establishing the concept of an artificial intelligence audit, the possibility of expanding artificial intelligence audits in the intelligent information society and measures to prevent corruption are reviewed. In addition, since the category of artificial intelligence audit is unclear in Korea's artificial intelligence law and audit-related laws, the scope of artificial intelligence audits is clearly identified, and implications for Korea are sought by reviewing and analyzing artificial intelligence audit cases in public administration in the United States and the Netherlands. Finally, five legal tasks of artificial intelligence audits are reviewed and ways to improve them are suggested: ⅰ) the enactment of an Artificial Intelligence Audit Act; ⅱ) the acceptability of automated decision-making in artificial intelligence audits; ⅲ) clarification of the standards and scope of artificial intelligence audits; ⅳ) the legal task of using artificial intelligence to prevent fraud and fraudulent payments; and ⅴ) strengthening new technology capabilities for future auditors and audit institutions.
- Research Article
- 10.36433/kacla.2025.8.2.3
- Aug 31, 2025
- Korea Anti-Corruption Law Association
Audit agencies such as the United States, the United Kingdom, the Netherlands, and Brazil conduct audits by artificial intelligence systems to ensure the effectiveness of the return of illegal supply and demand, illegal and unfair measures of duties, and the appropriateness of audits. In Korea, the Framework Act on Artificial Intelligence was enacted in 2025, and Article 20 of the General Act on Public Admiaiatration on Administration stipulates the automatic disposition of artificial intelligence for administrative disposition of binding acts. The Audit Office Act and the Public Audit Act require the establishment and operation of an audit-related information system, but it cannot be concluded as an artificial intelligence information system because it does not define the concept of the information system. Since there are no prestigious regulations on the scope and target of audit activities by audit agencies, standards, confidentiality, registration and certification of audit artificial intelligence systems, etc., securing predictability, transparency, and legality of artificial intelligence audits is not guaranteed. In addition, the concept of artificial intelligence audit should include not only the audit of the artificial intelligence system, but also all audit activities using the artificial intelligence system by auditors or audit agencies. In particular, it is necessary to establish a third-party auditor system as well as internal auditors and external auditors for audit by artificial intelligence systems. The U.S. California bill on artificial intelligence audit was proposed to the California Legislature in February 2025. 
It would establish an internet-based artificial intelligence system for state audit and operational agencies to store audit information in databases; prescribe procedures for the registration, certification, and verification of artificial intelligence auditors and artificial intelligence systems; and regulate the disclosure and confidentiality of audit information, the subjects of information to be provided to artificial intelligence systems, the explainability of artificial intelligence audits, and an obligation to store audit information for 10 years, although it does not mandate that administrative or public institutions introduce artificial intelligence systems. This paper first establishes the concept of artificial intelligence audit and then reviews the potential for expanding artificial intelligence-based auditing in the intelligent information society, along with measures to prevent corruption. Because the category of artificial intelligence audit is unclear in Korea's artificial intelligence law and audit-related statutes, the paper clarifies the scope of artificial intelligence audit and draws implications for Korea by reviewing and analyzing artificial intelligence audit cases in public administration in the United States and the Netherlands. Finally, the legal tasks of artificial intelligence audit are reviewed in five parts, with suggested improvements for each: ⅰ) enactment of an Artificial Intelligence Audit Act; ⅱ) the acceptability of automated decision-making in artificial intelligence audits; ⅲ) clarification of the standards and scope of artificial intelligence audit; ⅳ) the legal tasks involved in using artificial intelligence to prevent fraud and illegal or improper payments; and ⅴ) strengthening the new-technology capabilities of future auditors and audit institutions.
- Research Article
12
- 10.1093/ijlit/eaac018
- Nov 21, 2022
- International Journal of Law and Information Technology
The European Union’s General Data Protection Regulation requires organizations to perform a Data Protection Impact Assessment (DPIA) to consider the fundamental rights risks of their artificial intelligence (AI) systems. However, assessing such risks can be challenging, as fundamental rights are often considered abstract in nature. So far, guidance on DPIAs has largely focused on data protection, leaving broader fundamental rights aspects less elaborated. This is problematic because potential negative societal consequences of AI systems may remain unaddressed and damage public trust in organizations using AI. To address this, we introduce a practical four-phase framework that assists organizations in performing fundamental rights impact assessments. It involves organizations (i) defining the system’s purposes and tasks, and the responsibilities of the parties involved in the AI system; (ii) assessing the risks arising from the system’s development; (iii) justifying why the risks of potential infringements of rights are proportionate; and (iv) adopting organizational and/or technical measures to mitigate the identified risks. We further indicate how regulators might support these processes with practical guidance.
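The four phases described in the abstract can be pictured as a simple checklist data structure: an assessment is complete only when every phase's questions have documented findings. This is an illustrative sketch only; the phase names are taken from the abstract, but the concrete questions, class names, and sign-off logic are assumptions, not the paper's framework text.

```python
from dataclasses import dataclass, field

@dataclass
class Phase:
    """One phase of a fundamental rights impact assessment (illustrative)."""
    name: str
    questions: list[str]
    findings: dict[str, str] = field(default_factory=dict)

    def unanswered(self) -> list[str]:
        # Questions with no recorded finding still block sign-off.
        return [q for q in self.questions if q not in self.findings]

def build_fria() -> list[Phase]:
    # Phases follow the abstract's (i)-(iv) outline; the questions
    # are hypothetical placeholders, not quoted from the paper.
    return [
        Phase("scoping", ["What are the system's purposes and tasks?",
                          "Which parties are responsible, and for what?"]),
        Phase("risk assessment", ["Which fundamental rights could development of the system put at risk?"]),
        Phase("proportionality", ["Why are the risks of potential infringements proportionate?"]),
        Phase("mitigation", ["Which organizational/technical measures mitigate each identified risk?"]),
    ]

def ready_for_signoff(phases: list[Phase]) -> bool:
    # The assessment is complete only when no phase has open questions.
    return all(not p.unanswered() for p in phases)
```

A fresh assessment starts with every question open, so `ready_for_signoff` stays false until each phase's findings are filled in and reviewed.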
- Research Article
1
- 10.1007/s43681-024-00560-0
- Sep 2, 2024
- AI and Ethics
Artificial intelligence systems can expand the capabilities and enhance the efficiency of law enforcement agencies in preventing, investigating, detecting, and prosecuting criminal offences in the European Union. At the same time, the deployment of artificial intelligence in the security domain often raises numerous legal and ethical concerns. The ALIGNER Fundamental Rights Impact Assessment is an operational tool, rooted in fundamental rights and in the principles of AI ethics, ready to be integrated into the AI governance measures of European law enforcement agencies to inform their decision-making processes and ensure compliance with the recently adopted Artificial Intelligence Act. This paper first introduces the main tensions between law enforcement AI and fundamental rights, as enshrined in the Charter of Fundamental Rights of the European Union; it then gives an overview of the main developments and best practices in AI governance and their relationship with fundamental rights and AI ethics; finally, it describes the structure of the ALIGNER Fundamental Rights Impact Assessment.
- Research Article
1
- 10.32417/1997-4868-2024-24-03-440-449
- Mar 26, 2024
- Agrarian Bulletin of the
Abstract. The quality of managerial decisions is one of the most acute problems in agriculture. It can be improved with digital technologies, including artificial intelligence (AI) systems. The purpose of the study is to clarify the main stages of managerial decision-making when AI systems are used. The scientific novelty lies in the development of a structural model of managerial decision-making that takes the use of AI systems into account; the main components of this process are identified. The research methods were an analysis of publications in the WoS scientific citation network on the topics of “agriculture” and “artificial intelligence”, as well as the abstract-logical method in analyzing the main stages of managerial decision-making. The results of the study were the determination of the composition and content of the stages of the procedural decision invariant, taking into account the use of artificial intelligence systems. AI systems make it possible to diagnose emerging problems in crop production, animal husbandry, and technical systems at an early stage. Data collection and analysis in AI-assisted managerial decision-making include direct data collection using sensors, cameras, scanners, etc.; data cleaning and preliminary analysis; exploratory and statistical analysis; data modeling; and interpretation of the results. AI systems make it possible to operate on large data sets from agricultural production facilities, which reduces uncertainty in managerial decision-making. The analysis of alternatives and the development of a management decision using AI systems include forecasting agricultural development indicators within a given system of constraints, generating alternative solutions, choosing the optimal alternative, and accepting or rejecting the proposed alternatives.
AI systems can also be used to automate and optimize the implementation of management decisions, as well as their monitoring and control. Using AI systems to automate management decision-making processes in agriculture can help improve management efficiency.
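The "generate alternatives, apply constraints, choose the optimal one" step described above can be sketched as a small selection function. This is a minimal illustration of that generic pattern, not the paper's model: the fertilizer-dose example, the cost figure, and the toy yield-response function are all invented for demonstration.

```python
def choose_alternative(alternatives, constraints, score):
    """Pick the highest-scoring alternative that satisfies every constraint.

    alternatives: candidate decisions (any objects)
    constraints:  predicates alternative -> bool
    score:        objective function alternative -> float (higher is better)
    """
    feasible = [a for a in alternatives if all(c(a) for c in constraints)]
    if not feasible:
        return None  # the decision-maker may reject all proposals
    return max(feasible, key=score)

# Hypothetical example: choose a fertilizer dose under a budget constraint.
doses = [50, 100, 150, 200]             # kg/ha, invented candidate doses
budget_ok = lambda d: d * 1.2 <= 200    # assumed cost of 1.2 units per kg
yield_gain = lambda d: d * (300 - d)    # toy concave yield-response curve
best = choose_alternative(doses, [budget_ok], yield_gain)  # 150: 200 exceeds the budget
```

Here the 200 kg/ha dose is infeasible under the budget, so the function maximizes the toy yield response over the remaining candidates; in practice the score would come from a forecasting model fed by the sensor data described above.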