Counterfactual Explanations via Locally-guided Sequential Algorithmic Recourse
Counterfactuals operationalised through algorithmic recourse have become a powerful tool to make artificial intelligence systems explainable. Conceptually, given an individual classified as \(y\) – the factual – we seek actions such that their prediction becomes the desired class \(y^{\prime}\) – the counterfactual. This process offers algorithmic recourse that is (1) easy to customise and interpret, and (2) directly aligned with the goals of each individual. However, the properties of a “good” counterfactual are still largely debated; it remains an open challenge to locate an effective counterfactual along with its corresponding recourse. Some strategies use gradient-driven methods, but these offer no guarantees on the feasibility of the recourse and are open to adversarial attacks on carefully created manifolds. This can lead to unfairness and lack of robustness. Other methods are data-driven, which mostly addresses the feasibility problem at the expense of privacy, security, and secrecy as they require access to the entire training data set. Here, we introduce a model-agnostic technique that composes feasible and actionable counterfactual explanations using locally-acquired information at each step of the algorithmic recourse. Our explainer preserves the privacy of users by only leveraging data that it specifically requires to construct actionable algorithmic recourse, and protects the model by offering transparency solely in the regions deemed necessary for the intervention.
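As an illustration of the sequential, locally-guided idea the abstract describes, here is a minimal sketch against a toy linear classifier. The weights, step size, and greedy single-feature actions are illustrative assumptions, not the paper's actual method:

```python
# Toy linear model -- the weights and bias are illustrative assumptions.
W, B = (-1.0, 2.0), -0.5

def score(x):
    return sum(w * xi for w, xi in zip(W, x)) + B

def predict(x):
    return 1 if score(x) > 0 else 0

def sequential_recourse(x, target=1, step=0.25, max_steps=40):
    """Greedy locally-guided search: at each step, try every single-feature
    action of size +/-step and keep the one that moves the model score
    furthest toward the target class; the kept actions form the recourse."""
    x, path = list(x), []
    for _ in range(max_steps):
        if predict(x) == target:
            break
        candidates = []
        for i in range(len(x)):
            for d in (step, -step):
                c = list(x)
                c[i] += d
                gain = score(c) if target == 1 else -score(c)
                candidates.append((gain, i, d, c))
        _, i, d, x = max(candidates)   # best local action wins
        path.append((i, d))
    return x, path

x0 = (1.0, 0.0)                        # factual, classified as 0
cf, actions = sequential_recourse(x0)  # counterfactual + recourse steps
print(predict(x0), predict(cf))        # 0 1
```

Each element of `actions` is a (feature index, change) pair, i.e. the recourse itself. A real implementation would additionally constrain which features are actionable and query the model only in the regions needed for the intervention, as the abstract emphasises.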
- Book Chapter
- 10.3233/atde250917
- Oct 1, 2025
The possibility of neuro-linguistic textual identification of intelligent systems (IS) and artificial intelligence (AI) systems is investigated. To set up the task, the intelligent systems were given a task in two languages, and the AI systems, drawn from different generations, were given a task in four languages. A specialized software package was used to evaluate information parameters, together with an information analyzer designed for neuro-linguistic identification of texts. The study showed that the parameters of neuro-linguistic text identification change when moving from one system to another, both for intelligent systems and for AI systems. For AI systems, the parameters change when switching from one language to another within a single neural network, and when changing neural networks while keeping the same language. For intelligent systems, the parameters likewise change when moving from one language to another and when changing the intelligent system while keeping the same language. These results make it possible to use information characteristics as parameters for the neuro-linguistic textual identification of AI systems and intelligent systems.
- Research Article
- 10.26565/2226-0994-2024-71-7
- Dec 23, 2024
- The Journal of V. N. Karazin Kharkiv National University, Series "Philosophy. Philosophical Peripeteias"
The question of the expediency and the fundamental possibility of machine imitation of human intellect is discussed from the point of view of evaluating the prospects of various directions of development of artificial intelligence systems. It is shown that, even beyond this practical aspect, resolving the question of whether a machine equivalent of the human mind can in principle be created is of great importance for understanding the nature of human thinking, consciousness and the mental in general. It is noted that the accumulated experience of creating various artificial intelligence systems, together with the currently available results of studies of human intelligence and consciousness in philosophy and psychology, allows a preliminary assessment of the prospects of creating an algorithmic artificial system equal in its capabilities to human intelligence. The drawbacks revealed in the use of artificial intelligence systems by mass users and in scientific research are analyzed. The key disadvantages of artificial intelligence systems are the inability to independently set goals, to form a consolidated «opinion» when working with divergent data, to objectively evaluate the results obtained, and to generate revolutionary new ideas and approaches. The «second-level» disadvantages are the insufficiency of the information accumulated by mankind for further training of artificial intelligence systems, and the resulting training of models on content partially synthesized by artificial intelligence systems themselves, which leads to the «forgetting» of part of the information obtained during training and to more frequent output of unreliable information.
This, in turn, makes it necessary to check the reliability of every answer given by an artificial intelligence system whenever critical information is processed, which, given the plausibility of the data produced by such systems and the comfortable form of their presentation, requires the user to have well-developed critical thinking. It is concluded that the main advantage of artificial intelligence systems is that they can significantly increase the efficiency of information retrieval and primary processing, especially when dealing with large data sets. The importance of the ethical component in artificial intelligence, and of a regulatory framework that introduces responsibility for harm that may be caused by the use of artificial intelligence systems, is substantiated, especially for multimodal systems. It is concluded that the risks associated with the use of multimodal artificial intelligence systems will consistently increase if such functions of human consciousness as will, emotions and adherence to moral principles are realized in them.
- Research Article
- 10.17223/15617793/500/23
- Jan 1, 2024
- Vestnik Tomskogo gosudarstvennogo universiteta
The revolutionary technological achievements of the modern world inevitably pose to humanity a number of issues that require legal reflection. The most significant breakthrough of the last few years is artificial intelligence systems, which have been very successfully integrated into many spheres of life of the world community. To date, many countries have no systematic legal norms regulating artificial intelligence. The aim of this work is to formulate specific proposals for the legislative regulation of the field of artificial intelligence. To reach this aim, the authors analyzed legislative acts and law-enforcement practice in the Russian Federation and in other technologically developed countries. Special attention was paid to whether artificial intelligence can be considered a subject of law. The authors also examined the doctrinal points of view of both the domestic and the foreign scientific community. Based on the results of this comprehensive study, the authors propose considering the attribution of limited legal personality to some artificial intelligence systems: certain rights and responsibilities should be given not to all artificial intelligence systems, but only to those that show signs of strong artificial intelligence. In this regard, the authors propose to classify all artificial intelligence systems according to how significant the legal facts and legal consequences they are able to generate are. The more "advanced" artificial intelligence systems should be structured into one group, "strong intelligent systems"; less developed systems should be included in another group, "weak intelligent systems". It is advisable to classify artificial intelligence systems not by a generalized enumeration of their functionality, but by their specific, literal enumeration.
A specific list of "strong intelligent systems" would be formed and approved by the Government of the Russian Federation. In connection with the proposed classification comes the idea of attributing legal personality to "strong intelligent systems". By analogy with the institution of legal entities, it is possible to provide a procedure for delegating certain rights and obligations to strong artificial intelligence, thereby creating a new subject in certain legal relations. Thus, the results of the study can outline certain boundaries of the work of artificial intelligence, contributing to the creation of specific constituent documents or protocols of functioning.
- Research Article
- 10.3844/jcssp.2007.195.198
- Apr 1, 2007
- Journal of Computer Science
Due to the large size of the database, the entire training dataset cannot be used to construct the classifiers. One popular solution is to separate the stream data into chunks, learn a base classifier from each chunk, and then integrate all base classifiers into a multiple classifier system (MCS). However, these data chunks do not always contain the classes in the same proportions as the entire training data set. We therefore introduce a re-sampling method based on the statistical distribution of the class attribute. In the proposed method, the probability of occurrence of every class is first estimated over the entire training data set, and a threshold is fixed for each class based on that probability. When a data set is selected at random, the class probabilities in the sample are checked against the thresholds. A sample that satisfies all the thresholds is allowed to construct the model; otherwise, re-sampling is performed, and the process is repeated until the sample satisfies the thresholds for all classes. The proposed method yields higher accuracy than random sampling without class thresholds. We have also compared the accuracy of different classifiers. Experimental results and comparative studies demonstrate the efficiency and efficacy of our method.
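A small sketch of the re-sample-until-thresholds-pass loop the abstract describes. The function names, the half-of-global-share threshold rule, and the toy labels are assumptions for illustration:

```python
import random
from collections import Counter

def class_fractions(labels):
    """Empirical probability of each class in a list of labels."""
    n = len(labels)
    return {c: k / n for c, k in Counter(labels).items()}

def sample_chunk(labels, chunk_size, thresholds, max_tries=500, seed=0):
    """Draw random chunks; accept the first whose per-class fractions meet
    every threshold, re-sampling otherwise -- the loop from the abstract."""
    rng = random.Random(seed)
    indices = range(len(labels))
    for _ in range(max_tries):
        pick = rng.sample(indices, chunk_size)
        frac = class_fractions([labels[i] for i in pick])
        if all(frac.get(c, 0.0) >= t for c, t in thresholds.items()):
            return pick              # this chunk may train a base classifier
    raise RuntimeError("no chunk satisfied the class thresholds")

# Thresholds are fixed from the full training set's class probabilities.
labels = [0] * 80 + [1] * 20                 # imbalanced toy stream
p = class_fractions(labels)                  # {0: 0.8, 1: 0.2}
thresholds = {c: 0.5 * p[c] for c in p}      # require half the global share
chunk = sample_chunk(labels, chunk_size=20, thresholds=thresholds)
```

Each accepted chunk would then train one base classifier of the MCS; chunks that under-represent a class are simply redrawn rather than used.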
- Research Article
- 10.17223/15617793/502/19
- Jan 1, 2024
- Vestnik Tomskogo gosudarstvennogo universiteta
The article analyzes the influence of artificial intelligence on various spheres of social life: educational, cultural, economic, and others, from the standpoint of understanding the role of a person whose thinking is shaped by artificial intelligence systems. This influence raises the inevitable question of transforming the legal norms governing these areas. No matter how technologically advanced artificial intelligence systems are, there is always a risk of an incorrect decision, which can cause damage to property or harm to life and health. An important conceptual problem area of regulation is the development of a model of responsibility for harm caused by artificial intelligence and robotics systems. Approaches to identifying the culprit are considered in the article. Several situations stand out when deciding the issue of responsibility for harm caused by artificial intelligence systems. Firstly, the program code may be deliberately distorted at the stage of creating or training the system. In such a situation the responsibility naturally falls on the person involved in the creation or training; given that several persons are usually involved in such a process, difficulties may arise in identifying the specific guilty subject, similar to the situation when a violent crime is committed by a group of persons and the fatal blow is inflicted by one of its members. Secondly, harm may be caused by the illegal seizure of control over the artificial intelligence system. In such a situation, responsibility also lies with the attacker, a natural person.
Thirdly, in the process of self-learning, artificial intelligence systems may reach conclusions that do not depend on the actions of their developers, and harm may result. Disputes about the responsibility of such artificial intelligence arise precisely when its decisions are autonomous, that is, independent of human actions. It is concluded that at this stage our society is not ready to recognize artificial intelligence as an independent subject capable of bearing responsibility, especially in the criminal legal sphere. To resolve the question of whether artificial intelligence can (or cannot) be recognized as an independent subject of responsibility, it is necessary to proceed from the general provisions of the theory of legal responsibility, correlating its constituent elements with the characteristics of artificial intelligence systems, which requires independent comprehension and research. The authors declare no conflicts of interest.
- Research Article
- 10.36433/kacla.2025.8.2.51
- Aug 31, 2025
- Korea Anti-Corruption Law Association
Audit agencies in countries such as the United States, the United Kingdom, the Netherlands, and Brazil conduct audits with artificial intelligence systems to ensure the effective recovery of illegally received benefits, to address illegal and unfair performance of duties, and to ensure the appropriateness of audits. In Korea, the Framework Act on Artificial Intelligence was enacted in 2025, and Article 20 of the General Act on Public Administration stipulates automatic disposition by artificial intelligence for administrative dispositions of binding acts. The Audit Office Act and the Public Audit Act require the establishment and operation of an audit-related information system, but this cannot be regarded as an artificial intelligence information system because the concept of the information system is not defined. Since there are no explicit regulations on the scope and targets of audit activities by audit agencies, or on the standards, confidentiality, and registration and certification of audit artificial intelligence systems, the predictability, transparency, and legality of artificial intelligence audits are not guaranteed. In addition, the concept of an artificial intelligence audit should include not only the audit of an artificial intelligence system itself, but also all audit activities carried out by auditors or audit agencies using artificial intelligence systems. In particular, it is necessary to establish a third-party auditor system in addition to internal and external auditors for audits by artificial intelligence systems. A U.S. bill on artificial intelligence audit was proposed to the California Legislature in February 2025.
It would establish an artificial intelligence system on the Internet for state audit and operation agencies to store audit information in a database, and it prepares procedures for the registration, certification, and verification of artificial intelligence auditors and artificial intelligence systems, as well as for the disclosure and confidentiality of audit information, the subjects of information to be provided to artificial intelligence systems, the explainability of artificial intelligence audits, and the obligation to store audit information for 10 years; it does not, however, mandate that administrative or public institutions introduce artificial intelligence systems. In this paper, after establishing the concept of an artificial intelligence audit, the possibility of expanding artificial intelligence audits in the intelligent information society and measures to prevent corruption are reviewed. Since the category of artificial intelligence audit is unclear in Korea's artificial intelligence law and audit-related laws, the scope of artificial intelligence audit is clearly identified, and implications for Korea are sought by reviewing and analyzing artificial intelligence audit cases in the public administration of the United States and the Netherlands. Finally, the legal tasks of artificial intelligence audit are reviewed in five parts: ⅰ) the enactment of an Artificial Intelligence Audit Act, ⅱ) the acceptability of automatic decision-making in artificial intelligence audits, ⅲ) clarification of the standards and scope of artificial intelligence audits, ⅳ) legal tasks for the use of artificial intelligence to prevent fraud and the illegal receipt of benefits, and ⅴ) strengthening new-technology capabilities for future auditors and audit institutions; ways to improve them are suggested.
- Research Article
- 10.36433/kacla.2025.8.2.3
- Aug 31, 2025
- Korea Anti-Corruption Law Association
Audit agencies in countries such as the United States, the United Kingdom, the Netherlands, and Brazil conduct audits with artificial intelligence systems to ensure the effective recovery of illegally received benefits, to address illegal and unfair performance of duties, and to ensure the appropriateness of audits. In Korea, the Framework Act on Artificial Intelligence was enacted in 2025, and Article 20 of the General Act on Public Administration stipulates automatic disposition by artificial intelligence for administrative dispositions of binding acts. The Audit Office Act and the Public Audit Act require the establishment and operation of an audit-related information system, but this cannot be regarded as an artificial intelligence information system because the concept of the information system is not defined. Since there are no explicit regulations on the scope and targets of audit activities by audit agencies, or on the standards, confidentiality, and registration and certification of audit artificial intelligence systems, the predictability, transparency, and legality of artificial intelligence audits are not guaranteed. In addition, the concept of an artificial intelligence audit should include not only the audit of an artificial intelligence system itself, but also all audit activities carried out by auditors or audit agencies using artificial intelligence systems. In particular, it is necessary to establish a third-party auditor system in addition to internal and external auditors for audits by artificial intelligence systems. A U.S. bill on artificial intelligence audit was proposed to the California Legislature in February 2025.
It would establish an artificial intelligence system on the Internet for state audit and operation agencies to store audit information in a database, and it prepares procedures for the registration, certification, and verification of artificial intelligence auditors and artificial intelligence systems, as well as for the disclosure and confidentiality of audit information, the subjects of information to be provided to artificial intelligence systems, the explainability of artificial intelligence audits, and the obligation to store audit information for 10 years; it does not, however, mandate that administrative or public institutions introduce artificial intelligence systems. In this paper, after establishing the concept of an artificial intelligence audit, the possibility of expanding artificial intelligence audits in the intelligent information society and measures to prevent corruption are reviewed. Since the category of artificial intelligence audit is unclear in Korea's artificial intelligence law and audit-related laws, the scope of artificial intelligence audit is clearly identified, and implications for Korea are sought by reviewing and analyzing artificial intelligence audit cases in the public administration of the United States and the Netherlands. Finally, the legal tasks of artificial intelligence audit are reviewed in five parts: ⅰ) the enactment of an Artificial Intelligence Audit Act, ⅱ) the acceptability of automatic decision-making in artificial intelligence audits, ⅲ) clarification of the standards and scope of artificial intelligence audits, ⅳ) legal tasks for the use of artificial intelligence to prevent fraud and the illegal receipt of benefits, and ⅴ) strengthening new-technology capabilities for future auditors and audit institutions; ways to improve them are suggested.
- Research Article
- 10.1126/science.adw8151
- Sep 25, 2025
- Science (New York, N.Y.)
Cooperation, the process through which individuals work together to achieve common goals, is fundamental to human and animal societies and increasingly critical in artificial intelligence. Here, we investigated cooperation in mice and artificial intelligence systems, examining how they learn to actively coordinate their actions to obtain shared rewards. We identified key social behavioral strategies and decision-making processes in mice that facilitate successful cooperation. These processes are represented in the anterior cingulate cortex (ACC) and ACC activity causally contributes to cooperative behavior. We extended our findings to artificial intelligence systems by training artificial agents in a similar cooperation task. The agents developed behavioral strategies and neural representations reminiscent of those observed in the biological brain, revealing parallels between cooperative behavior in biological and artificial systems.
- Conference Article
- 10.5121/csit.2021.112403
- Dec 24, 2021
As automation is changing everything in today's world, there is an urgent need for artificial intelligence, the basic component of today's automation and innovation, to have software engineering standards for analysis and design before systems are built, in order to avoid disaster. Artificial intelligence software can reduce development cost and time for programmers. There is a probability that society may reject artificial intelligence unless a trustworthy software engineering standard is created to make such systems safe. For society to have more confidence in artificial intelligence applications and systems, researchers and practitioners in the computing industry need to work not only on the intersection of artificial intelligence and software engineering, but also on a software theory that can serve as a universal framework for software development, especially in artificial intelligence systems. This paper seeks to (a) encourage the development of standards in artificial intelligence that will contribute substantially to the software engineering industry, considering that artificial intelligence is one of the leading technologies driving innovation worldwide, and (b) propose that professional bodies from philosophy, law, medicine, engineering, government, the international community (such as NATO and the UN), and science and technology develop a standardized framework for how AI can work in the future that guarantees safety to the public. These standards would boost public confidence and encourage acceptance of artificial intelligence applications and systems by both end-users and the general public.
- Research Article
- 10.3390/jrfm14120604
- Dec 13, 2021
- Journal of Risk and Financial Management
Purpose: Technology initiatives are now incorporated into a wide range of business domains. The objective of this paper is to explore the possible effects that Artificial intelligence systems have on entrepreneurs’ decision-making, through the mediation of customer preference and industry benchmark. Design/methodology/approach: This is a non-empirical review of the literature and the development of a conceptual model. Searches were conducted in key academic databases, such as Emerald Online Journals, Taylor and Francis Online Journals, JSTOR Online Journals, Elsevier Online Journals, IEEE Xplore, and Directory of Open Access Journals (DOAJ) for papers which focused on Artificial intelligence (AI), Entrepreneurial decision-making, Customer preference, Industry benchmarks, and Employee involvement. In total, 25 articles met the predefined criteria and were used. Findings: The study proposes that Artificial intelligence systems can facilitate better decision-making from the entrepreneurial perspective. In addition, the study demonstrates that employees, as stakeholders, can moderate the relationship between Artificial intelligence systems and better decision-making for entrepreneurs with their involvement. Moreover, the study demonstrates that customer preference and industry benchmark can mediate the relationship between Artificial intelligence systems and better entrepreneur decision-making. Research limitations/implications: The study assumes a perfect ICT environment for the smooth operation of Artificial intelligence systems. However, this might not always be the case. The study does not consider the personal disposition of entrepreneurs in terms of ICT usage and adoption. Practical implications: This study proposes that entrepreneurial decision-making is enriched in an environment of Artificial intelligence systems, which is complemented by customer preference, industry benchmark, and employee involvement. 
This finding provides entrepreneurs with a possible technological tool for better decision-making, highlighting the endless options offered by Artificial intelligence systems. Social Implications: The introduction of AI in the business decision-making process comes with many social issues in relation to the impact machines have on humans and society. This paper suggests how this new technology should be used without destroying society. Originality/value: This conceptual framework serves as a valuable organizational spectrum for entrepreneurial development. In addition, this study makes a valuable contribution to entrepreneurial development through Artificial intelligence systems.
- Research Article
- 10.1016/j.fertnstert.2020.10.040
- Nov 1, 2020
- Fertility and Sterility
Predictive modeling in reproductive medicine: Where will the future of artificial intelligence research take us?
- Research Article
- 10.21564/2663-5704.49.229779
- May 26, 2021
- The Bulletin of Yaroslav Mudryi National Law University. Series:Philosophy, philosophies of law, political science, sociology
LAW IN DIGITAL REALITY
- Research Article
- 10.15539/khlj.57.4.3
- Dec 30, 2022
- Kyung Hee Law Journal
The development of science and technology in the Fourth Industrial Revolution requires a change in our traditional thinking. The autonomy of artificial intelligence systems based on big data and machine learning is increasing day by day. These changes call for a shift in the basic paradigm of criminal law, which rests on the correlation between free will and responsibility. Accordingly, criminal law faces the question of whether a new legal personhood should be recognized for artificial intelligence systems or artificial intelligence robots.
 Although the concept of an artificial intelligence system does not yet seem to be unified, the European Union describes an artificial intelligence system as software, implemented with specific techniques, that produces outputs such as content, predictions and decisions through reasoning and interaction with its environment, within the scope of purposes defined by humans. The representative characteristic of these AI systems is autonomy, which can be defined as "the ability to make decisions and execute them externally, independent of external influences or controls." (On the question of who bears responsibility for harm caused by artificial intelligence systems or robots capable of autonomous judgment, a negative view holds that the capacity for responsibility cannot be attributed to artificial intelligence robots, while a positive view holds that strong artificial intelligence systems, unlike weak ones, can be recognized as autonomous subjects of action capable of bearing responsibility.)
 It is necessary to consider whether artificial intelligence robots can be regarded as persons in the way that natural persons are, that is, whether we need to re-evaluate the capacity for responsibility of all beings. If, in the future, artificial intelligence robots become more common than they are today, interactions with humans become more active, and the intellectual capabilities of artificial intelligence systems further improve, there will be a political need to regulate them legally.
 The current legal system recognizes the subjectivity of rights by granting unlimited legal capacity only to natural persons, and recognizes it within a limited scope for corporations. Since this is recognized by provisions of law rather than being absolute, the extent to which legal capacity or legal personality is granted is a legal-policy issue that can vary with the era and the society. Although currently weak AI systems do not have full autonomy, considering the unpredictability inherent in AI systems, it is necessary to make the discussion on granting legal personality to the strong AI systems of the future more concrete.
- Research Article
- 10.1080/08982112.2022.2089854
- Jun 29, 2022
- Quality Engineering
Artificial intelligence (AI) systems are increasingly popular in many applications. Nevertheless, AI technologies are still developing, and many issues need to be addressed. Among those, the reliability of AI systems needs to be demonstrated so that AI systems can be used with confidence by the general public. In this paper, we provide statistical perspectives on the reliability of AI systems, focusing on the time dimension. That is, the system can perform its designed functionality for the intended period of time. We introduce a so-called “SMART” statistical framework for AI reliability research, which includes five components: Structure of the system, Metrics of reliability, Analysis of failure causes, Reliability assessment, and Test planning. We review traditional methods in reliability data analysis and software reliability, and discuss how those existing methods can be transformed for reliability modeling and assessment of AI systems. Different from traditional reliability studies, the focus of AI reliability is on the software system to include the training data. Thus, we describe recent developments in modeling and analysis of AI reliability for software systems. The paper outlines statistical research challenges in this area, including out-of-distribution detection, the effect of the training set, adversarial attacks, model accuracy, and uncertainty quantification. We discuss how those topics can be related to AI reliability, with illustrative examples. The final element of SMART (test planning), is critical for the demonstration of AI reliability. Therefore we discuss data collection and testing planning, highlighting methods for improving system design in order to achieve higher AI reliability. The paper closes with some concluding remarks.
- Supplementary Content
- 10.2471/blt.19.237487
- Feb 25, 2020
- Bulletin of the World Health Organization
The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of accountability and safety worldwide have not yet adjusted. We focus on two aspects of clinical artificial intelligence used for decision-making: moral accountability for harm to patients; and safety assurance to protect patients against such harm. Artificial intelligence-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems and less knowledge and understanding of precisely how the artificial intelligence systems reach their decisions. We illustrate this analysis by applying it to an example of an artificial intelligence-based system developed for use in the treatment of sepsis. The paper ends with practical suggestions for ways forward to mitigate these concerns. We argue for a need to include artificial intelligence developers and systems safety engineers in our assessments of moral accountability for patient harm. Meanwhile, none of the actors in the model robustly fulfil the traditional conditions of moral accountability for the decisions of an artificial intelligence system. We should therefore update our conceptions of moral accountability in this context. We also need to move from a static to a dynamic model of assurance, accepting that considerations of safety are not fully resolvable during the design of the artificial intelligence system before the system has been deployed.