Transforming legal texts into computational logic: Enhancing next generation public sector automation through explainable AI decision support
- Conference Article
- 10.1109/is.2006.348382
- Sep 1, 2006
Summary form only given. We first present some general remarks on challenges faced by modern information technology, notably when a human being is a relevant factor. These challenges are mainly related to inherent difficulties in solving some meta-problems, in particular broadly perceived decision making. We assume, on the one hand, a business intelligence related perspective, augmented with elements of the Web to make full use of all available tools and resources. On the other hand, we assume a human-centric computing perspective in the spirit of, for instance, Dertouzos's ideas. First, we present a brief account of modern approaches to real-world decision making and emphasize the concept of a decision-making process that involves many factors and aspects, such as the use of one's own and external knowledge, the involvement of various actors, individual habitual domains, non-trivial rationality, and different paradigms. As an example we mention Checkland's deliberative decision making (an important element of his soft approach to systems analysis). After an analysis of the specifics and difficulties encountered in many real-world decision-making situations, we strongly advocate the use of computer-based decision support systems. We briefly review the history of decision support systems and then present a popular classification, ranging from data-driven to Web-based and inter-organizational systems. We indicate that decision support systems should incorporate some sort of intelligence; we briefly mention some views of what intelligence may mean in this context, and then assume a more pragmatic, though limited, view of intelligent decision support systems. We indicate possible advantages of using elements of fuzzy logic and soft computing, notably Zadeh's computing with words, to merge the ideas presented: human-centric computing, decision-making processes, intelligent decision support, etc. Finally, we present an example of an implementation in which the above-mentioned ideas have to some extent been realized. This concerns a data- and document-driven decision support system for a small to medium company in which, first, Zadeh's computing with words and perceptions paradigm is employed via linguistic database summaries, elements of Web intelligence are used to derive additional information, and the ideas of intelligent decision support and human-centric computing are shown to be synergistically combined. We finish with some general remarks emphasizing that fuzzy logic and soft computing, notably as exemplified by Zadeh's computing with words and perceptions, may be viewed as providing just the right tools to solve the problems considered.
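As an illustration of the linguistic database summaries mentioned above, the following minimal Python sketch computes the degree of truth of a Yager/Kacprzyk-style summary such as "most daily sales are high"; the membership functions, the fuzzy quantifier, and the sales data are invented for illustration and are not taken from the described system.

```python
# Minimal sketch of a linguistic database summary in the computing-with-words
# style: truth of "most daily sales are high". All numbers are illustrative.

def mu_high(sales, low=50.0, high=100.0):
    """Fuzzy membership of a sales figure in the linguistic term 'high'."""
    if sales <= low:
        return 0.0
    if sales >= high:
        return 1.0
    return (sales - low) / (high - low)

def mu_most(proportion):
    """Fuzzy quantifier 'most' over a proportion in [0, 1]."""
    if proportion <= 0.3:
        return 0.0
    if proportion >= 0.8:
        return 1.0
    return (proportion - 0.3) / 0.5

def truth_of_summary(records):
    """Degree of truth of 'most records are high'."""
    proportion = sum(mu_high(r) for r in records) / len(records)
    return mu_most(proportion)

daily_sales = [42.0, 95.0, 77.0, 88.0, 61.0, 99.0]
print(f"T('most daily sales are high') = {truth_of_summary(daily_sales):.2f}")
```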
- Book Chapter
- 10.71443/9789349552029-05
- Mar 4, 2025
The increasing complexity and dynamism of cloud environments have introduced significant cybersecurity challenges, necessitating advanced risk assessment methodologies capable of handling uncertainty and evolving threats. Traditional risk management frameworks often rely on static and rule-based mechanisms, which lack adaptability in dynamic cloud ecosystems. To address these limitations, this chapter explores the integration of fuzzy logic and evolutionary computation for adaptive cyber risk management in cloud environments. Fuzzy logic provides a powerful framework for modeling imprecise security parameters and uncertainty in threat landscapes, enabling more flexible and context-aware risk assessment. Meanwhile, evolutionary computation offers an adaptive mechanism to optimize cybersecurity strategies through heuristic learning and intelligent decision-making. The chapter presents a hybrid risk assessment framework that leverages fuzzy inference systems to quantify risk levels and evolutionary algorithms to dynamically optimize security controls. It also examines the scalability of fuzzy-evolutionary approaches in large-scale cloud infrastructures and their effectiveness in mitigating real-time cyber threats, such as zero-day attacks, insider threats, and advanced persistent threats. The potential integration of explainable AI (XAI), deep learning, and quantum computing in enhancing fuzzy-based risk assessment models is also discussed. This research contributes to the advancement of self-learning, adaptive cyber defense mechanisms capable of proactively mitigating risks in multi-cloud and hybrid-cloud environments. The proposed framework ensures improved threat intelligence, automated risk prioritization, and enhanced decision transparency, offering a robust solution for next-generation cloud security.
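A minimal sketch of the hybrid idea described above: a tiny fuzzy estimate of risk from threat likelihood and impact, coupled with a toy (1+1) evolutionary loop that tunes an alert threshold. The membership shapes, rules, incident data, and fitness function are illustrative assumptions, not the chapter's actual framework.

```python
import random

def ramp_up(x, a, b):
    """0 below a, 1 above b, linear in between."""
    return min(1.0, max(0.0, (x - a) / (b - a)))

def fuzzy_risk(likelihood, impact):
    """Risk score in [0, 1] from two inputs in [0, 1] via three toy rules."""
    hi_l, hi_i = ramp_up(likelihood, 0.3, 0.8), ramp_up(impact, 0.3, 0.8)
    lo_l, lo_i = 1.0 - hi_l, 1.0 - hi_i
    rules = [
        (min(hi_l, hi_i), 0.9),                        # high AND high -> high risk
        (max(min(hi_l, lo_i), min(lo_l, hi_i)), 0.5),  # mixed         -> medium risk
        (min(lo_l, lo_i), 0.15),                       # low AND low   -> low risk
    ]
    weight_sum = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / weight_sum if weight_sum else 0.0

def fitness(threshold, incidents):
    """Toy penalty: missed genuinely bad incidents count triple, false alarms once."""
    missed = sum(1 for l, i, bad in incidents if bad and fuzzy_risk(l, i) < threshold)
    false_alarms = sum(1 for l, i, bad in incidents if not bad and fuzzy_risk(l, i) >= threshold)
    return 3 * missed + false_alarms

random.seed(0)
incidents = [(random.random(), random.random(), random.random() > 0.7) for _ in range(200)]

best = 0.5
for _ in range(100):  # (1+1) evolution strategy over the alert threshold
    candidate = min(1.0, max(0.0, best + random.gauss(0.0, 0.1)))
    if fitness(candidate, incidents) <= fitness(best, incidents):
        best = candidate
print(f"evolved alert threshold ~ {best:.2f}")
```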
- Research Article
- 10.1007/s43681-020-00001-8
- Oct 6, 2020
- AI and Ethics
Ethical and explainable artificial intelligence is an interdisciplinary research area involving computer science, philosophy, logic, and social sciences, etc. For an ethical autonomous system, the ability to justify and explain its decision-making is a crucial aspect of transparency and trustworthiness. This paper takes a Value-Driven Agent (VDA) as an example, explicitly representing implicit knowledge of a machine learning-based autonomous agent and using this formalism to justify and explain the decisions of the agent. For this purpose, we introduce a novel formalism to describe the intrinsic knowledge and solutions of a VDA in each situation. Based on this formalism, we formulate an approach to justify and explain the decision-making process of a VDA, in terms of a typical argumentation formalism, Assumption-based Argumentation (ABA). As a result, a VDA in a given situation is mapped onto an argumentation framework in which arguments are defined by the notion of deduction. Justified actions with respect to semantics from argumentation correspond to solutions of the VDA. The acceptance (rejection) of arguments and their premises in the framework provides an explanation for why an action was selected (or not). Furthermore, we go beyond the existing version of VDA, considering not only practical reasoning, but also epistemic reasoning, such that the inconsistency of knowledge of the VDA can be identified, handled, and explained.
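To illustrate how accepted arguments can justify actions, the toy sketch below computes the grounded extension of a small abstract argumentation framework; the paper itself works with Assumption-based Argumentation and a richer formalism, so the arguments, attacks, and action labels here are invented simplifications.

```python
# Toy grounded-semantics computation: an argument is accepted once all of
# its attackers are defeated; an argument is defeated once some accepted
# argument attacks it. Accepted "action" arguments correspond to justified
# actions; rejected ones come with an explanation (the accepted attacker).

def grounded_extension(arguments, attacks):
    """Return the set of accepted arguments for (arguments, attacks)."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in defeated:
                continue
            attackers = {a for a, b in attacks if b == arg}
            if attackers <= defeated:          # every attacker already defeated
                accepted.add(arg)
                changed = True
        for arg in arguments:
            if arg not in defeated and any(a in accepted for a, b in attacks if b == arg):
                defeated.add(arg)
                changed = True
    return accepted

arguments = {"do_action_A", "do_action_B", "B_violates_value"}
attacks = {("B_violates_value", "do_action_B")}   # the value argument defeats action B
print("justified:", grounded_extension(arguments, attacks))
# action A is justified; action B is rejected because a value argument attacks it
```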
- Research Article
- 10.1007/s10115-007-0069-3
- Mar 21, 2007
- Knowledge and Information Systems
Matias Alvarado is currently a Research Scientist at the Centre of Research and Advanced Studies (CINVESTAV-IPN, Mexico). He received a Ph.D. degree in computer science from the Technical University of Catalonia, with a major in artificial intelligence, and the B.Sc. degree in mathematics from the National Autonomous University of Mexico. His interests in research and technological applications include knowledge management and decision making; autonomous agents and multiagent systems for supply chain disruption management; and concurrency control, pattern recognition, and computational logic. He is the author of about 50 scientific papers, a guest editor of journal special issues on topics of artificial intelligence and knowledge management for the oil industry, and an invited academic at the National University of Singapore, the Technical University of Catalonia, the University of Oxford, the University of Utrecht, and the Benemerita Universidad Autonoma de Puebla. Leonid Sheremetov received the Ph.D. degree in computer science in 1990 from the St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, where he has worked as a Research Fellow and a Senior Research Fellow since 1982. He is now a Principal Investigator of the Research Program on Applied Mathematics and Computing of the Mexican Petroleum Institute, where he leads the Distributed Intelligent Systems Group, and a part-time professor at the Artificial Intelligence Laboratory of the Centre for Computing Research of the National Polytechnic Institute (CIC-IPN), Mexico. His current research interests include multiagent systems, the Semantic Web, decision support systems, and enterprise information integration. His group developed the CAPNET agent platform and has been involved in several projects for the energy industry, ranging from petroleum exploration and production to knowledge management, with a special focus on the industrial exploitation of agent technology. He is also a member of the editorial boards of several journals. Rene Banares-Alcantara has worked at the University of Oxford since October 2003 and is now a Reader in engineering science at the Department of Engineering Science and a Fellow in engineering at New College. He previously held a readership at the University of Edinburgh and lectureships in Spain and at the Universidad Nacional Autonoma de Mexico (UNAM). He obtained his undergraduate degree from UNAM and the M.S. and Ph.D. degrees from Carnegie Mellon University (CMU). Starting with his work at CMU, his research interests have been in the area of process systems engineering, in particular chemical process design and synthesis. He has developed strong relationships with computer science and artificial intelligence research groups in different universities and research institutes, with current research also linking to social and biological modeling. He has (co)authored more than 100 refereed publications and has been a Principal Investigator and a Researcher in several EPSRC and European Union projects. Francisco Cantu-Ortiz obtained the Ph.D. degree in artificial intelligence from the University of Edinburgh, United Kingdom, and the Bachelor's degree in computer systems engineering from the Tecnologico de Monterrey (ITESM), Mexico. He is a Full Professor of artificial intelligence at Tecnologico de Monterrey and is also the Dean of Research and Graduate Studies. He has been the Head of the Center for Artificial Intelligence and of the Informatics Research Center. Dr. Cantu-Ortiz has been the General Chair of about 15 international conferences on artificial intelligence and expert systems and was a Local Chair of the International Joint Conference on Artificial Intelligence in 2003. His research interests include knowledge-based systems and inference, machine learning, and data mining using Bayesian and statistical techniques for business intelligence, technology management, and entrepreneurial science. More recently, his interests have extended to epistemology and philosophy of science. He was the President of the Mexican Society for Artificial Intelligence and is a member of the IEEE Computer Society and the ACM.
- Research Article
- 10.1017/ash.2023.394
- Jun 1, 2023
- Antimicrobial Stewardship & Healthcare Epidemiology
Background: Using patient data from the electronic health record (EHR) and computer logic, an "electronic phenotype" can be created to identify patients with community-acquired pneumonia (CAP) in real time to assist with syndrome-specific antimicrobial stewardship efforts.1 We adapted and validated the performance of an inpatient CAP electronic phenotype for antimicrobial stewardship interventions. Methods: An automated scoring system was created within the EHR (Epic Systems) to identify hospitalized patients with CAP based on the variables and logic listed in Fig. 1B. We adapted a score used by the Michigan Hospital Medicine Safety Consortium (HMS) to identify patients with CAP, with additions made to improve sensitivity (Fig. 1).1 The score can be displayed in a column within the EHR patient list (Fig. 2). We validated the electronic phenotype via chart review of all hospitalized patients on systemic antimicrobials admitted to a medicine team consecutively between November 8 and 18, 2021. Patients who were readmitted within the validation time frame were excluded. We assessed the performance of the electronic phenotype by comparing the score to manual chart review, where "CAP diagnosis" was defined as (1) mention of "pneumonia" or "CAP" as part of the differential diagnosis in the admission documentation, (2) antimicrobials were started within 48 hours of admission, and (3) radiographic findings were suggestive of pneumonia. After initial evaluation, the scoring system was adjusted, and performance was re-evaluated during prospective audit and feedback performed on EHR CAP–positive patients over 13 days between July 2022 and December 2022. Results: We included 191 patients in our initial validation cohort. The CAP score had high sensitivity (95.83%), specificity (92.2%), and negative predictive value (99.35%), though a lower positive predictive value (63.89%) was noted (Table 2). The rules were further refined to include bloodstream infection only with Haemophilus influenzae or Streptococcus pneumoniae in rule 2B, and azithromycin was removed from "CAP antibiotics." After these changes, repeated evaluation of 88 patients with a positive CAP EHR score was performed, and only 20 (23%) were considered false-positive results. Conclusions: Electronic phenotypes can be used to create automated tools to identify patients with CAP with reasonable performance. Data from this tool can be used to guide more focused antimicrobial stewardship interventions and clinical decision support in the future. Reference: Vaughn VM, et al. A statewide collaborative quality initiative to improve antibiotic duration and outcomes in patients hospitalized with uncomplicated community-acquired pneumonia. Clin Infect Dis 2022;75:460–467. Disclosures: None
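A minimal sketch of what such a rule-based electronic phenotype score can look like in code; the variables, weights, and threshold below are hypothetical, since the actual logic is defined in the authors' Fig. 1B inside the EHR and is not reproduced here.

```python
# Hypothetical CAP electronic-phenotype scoring sketch (illustrative only).
from dataclasses import dataclass

@dataclass
class Encounter:
    pneumonia_on_differential: bool   # "pneumonia"/"CAP" mentioned in admission notes
    cap_antibiotic_within_48h: bool   # systemic antimicrobials started within 48 hours
    imaging_suggestive: bool          # radiographic findings consistent with pneumonia

def cap_score(e: Encounter) -> int:
    """Return a screening score; higher means more likely CAP (weights invented)."""
    score = 0
    score += 2 if e.pneumonia_on_differential else 0
    score += 2 if e.cap_antibiotic_within_48h else 0
    score += 1 if e.imaging_suggestive else 0
    return score

def flag_for_stewardship_review(e: Encounter, threshold: int = 4) -> bool:
    """Patients at or above the threshold would appear in a stewardship work list."""
    return cap_score(e) >= threshold

example = Encounter(True, True, False)
print(cap_score(example), flag_for_stewardship_review(example))
```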
- Research Article
- 10.33395/sinkron.v9i2.14817
- Jun 12, 2025
- Sinkron
Fertilizers are essential in modern agriculture as they supply vital nutrients to plants, enhancing growth and yield. However, selecting the most appropriate fertilizer involves multiple criteria and a diverse range of available options. This study conducts a comparative analysis of two Multi-Criteria Decision-Making (MCDM) methods, the Weighted Sum Model (WSM) and the Weighted Product (WP) method, supplemented by WSM-Score and vector-based approaches. The evaluation is based on four criteria (price, quality, ease of availability, and fertilizer form) across seven alternatives: Urea, Compost, TSP, KCL, Gandasil, NPK, and ZA. Using normalized weights from expert judgment, both methods were used to rank the alternatives. A key contribution of this study is the integration of WSM-Score and vector approaches, which enhance traditional MCDM by improving score comparability (WSM-Score) and enabling geometric interpretation of alternative positioning (vector). Results show that Compost (A2) ranks highest across all methods, indicating convergence despite differences in computational logic. WSM offers ease of interpretation, while WP better accounts for proportional differences but is more sensitive to low-performing criteria. The findings suggest that method selection should be context-dependent. Although the ranking results are consistent, the absence of empirical validation through expert comparison or field data limits the generalizability of the conclusions. Further research should include such validation to strengthen the reliability of MCDM-based decision support systems in agricultural applications.
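The following sketch shows the two aggregations being compared, applied to made-up normalized scores for three of the alternatives; the actual criterion weights and alternative scores come from the authors' expert judgment and are not reproduced here.

```python
# WSM (additive) vs. WP (multiplicative) ranking on illustrative data.
weights = {"price": 0.3, "quality": 0.3, "availability": 0.2, "form": 0.2}

# Benefit-normalized scores in [0, 1] for three of the seven alternatives (invented).
alternatives = {
    "Urea":    {"price": 0.6, "quality": 0.7, "availability": 0.9, "form": 0.8},
    "Compost": {"price": 0.9, "quality": 0.8, "availability": 0.7, "form": 0.6},
    "NPK":     {"price": 0.5, "quality": 0.9, "availability": 0.8, "form": 0.9},
}

def wsm_score(scores):
    """Weighted Sum Model: additive aggregation."""
    return sum(weights[c] * scores[c] for c in weights)

def wp_score(scores):
    """Weighted Product method: multiplicative aggregation."""
    result = 1.0
    for c in weights:
        result *= scores[c] ** weights[c]
    return result

for name, method in (("WSM", wsm_score), ("WP", wp_score)):
    ranking = sorted(alternatives, key=lambda a: method(alternatives[a]), reverse=True)
    print(name, "ranking:", ranking)
```

Even on these toy numbers the two methods can agree on the top alternative while differing in how strongly a single weak criterion penalizes it, which mirrors the sensitivity difference the paper discusses.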
- Research Article
- 10.1016/s0020-7373(88)80034-8
- Feb 1, 1988
- International Journal of Man-Machine Studies
ISIS: the interactive spatial information system
- Research Article
- 10.3233/978-1-60750-949-3-164
- Jan 1, 2004
- Studies in health technology and informatics
A major obstacle to sharing computable clinical knowledge is the lack of a common language for specifying expressions and criteria. Such a language could be used to specify decision criteria, formulae, and constraints on data and action. Although the Arden Syntax addresses this problem for clinical rules, its generalization to HL7's object-oriented data model is limited. The GELLO Expression language is an object-oriented language used for expressing logical conditions and computations in the GLIF3 (GuideLine Interchange Format, v. 3) guideline modeling language. It has been further developed under the auspices of the HL7 Clinical Decision Support Technical Committee as a proposed HL7 standard. GELLO is based on the Object Constraint Language (OCL), because it is vendor-independent, object-oriented, and side-effect-free. GELLO expects an object-oriented data model. Although the choice of model is arbitrary, standardization is facilitated by ensuring that the data model is compatible with the HL7 Reference Information Model (RIM).
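GELLO itself is an OCL-based expression language; the snippet below is not GELLO syntax but a Python sketch of the kind of side-effect-free, object-oriented decision criterion such a language is meant to express (for example, "the most recent LDL result exceeds a goal"). The classes and the threshold are illustrative and are not drawn from the HL7 RIM.

```python
# Illustrative object model plus a side-effect-free decision criterion.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Observation:
    code: str
    value: float
    effective: date

@dataclass
class Patient:
    observations: List[Observation]

def latest(patient: Patient, code: str) -> Optional[Observation]:
    """Most recent observation with the given code, if any."""
    matches = [o for o in patient.observations if o.code == code]
    return max(matches, key=lambda o: o.effective) if matches else None

def ldl_above_goal(patient: Patient, goal: float = 130.0) -> bool:
    """Decision criterion: the most recent LDL result exceeds the (illustrative) goal."""
    obs = latest(patient, "LDL")
    return obs is not None and obs.value > goal

p = Patient([Observation("LDL", 162.0, date(2023, 5, 1)),
             Observation("LDL", 121.0, date(2022, 11, 3))])
print(ldl_above_goal(p))   # True: the 2023 result is above the goal
```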
- Book Chapter
- 10.1007/978-3-642-10663-7_2
- Jan 1, 2010
Spatial Decision Support Systems (SDSS) are interactive, computer-based systems designed to support decision makers in achieving a higher effectiveness of decision making while solving a semi-structured spatial decision problem. Current spatial decision support techniques are predominantly based on Boolean logic, which makes their expressive power inadequate. This chapter presents how the Logic Scoring of Preference (LSP) method helps to overcome the inadequacies present in traditional approaches. LSP is well suited to produce so-called dynamic, geographic suitability maps (S-maps), which provide specialised information on the suitability degree of selected geographic regions for a specific purpose. The presented approach is based on soft computing and many-valued logic.
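At the core of LSP is a graded conjunction/disjunction, commonly realized as a weighted power mean whose exponent tunes the operator between simultaneity (conjunctive) and replaceability (disjunctive) behavior. The sketch below illustrates this on one hypothetical map cell; the weights, suitability degrees, and exponent values are invented, not taken from the chapter.

```python
# Weighted power mean as an LSP-style aggregator of elementary suitabilities.
def lsp_aggregate(suitabilities, weights, r):
    """Weighted power mean of suitability degrees in (0, 1]; r sets andness."""
    if r == 0:                                   # geometric-mean limit case
        result = 1.0
        for s, w in zip(suitabilities, weights):
            result *= s ** w
        return result
    return sum(w * s ** r for s, w in zip(suitabilities, weights)) ** (1.0 / r)

# Elementary suitabilities of one map cell for slope, road access, and land cost.
cell = [0.9, 0.4, 0.7]
weights = [0.5, 0.3, 0.2]

print("conjunctive  (r = -2):", round(lsp_aggregate(cell, weights, -2.0), 3))
print("neutral      (r = +1):", round(lsp_aggregate(cell, weights, 1.0), 3))
print("disjunctive  (r = +4):", round(lsp_aggregate(cell, weights, 4.0), 3))
```

Lower exponents make a poor score on any single criterion drag the overall suitability down (mandatory requirements), while higher exponents let strong criteria compensate for weak ones, which is exactly the expressive power Boolean overlays lack.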
- Research Article
- 10.37648/ijtbm.v14i01.012
- Jan 1, 2024
- International Journal of Transformations in Business Management
In the era of digital transformation, businesses are increasingly relying on intelligent systems to enhance operational efficiency and strategic decision-making. Artificial Intelligence-driven Decision Support Systems (AI-DSS) have emerged as a pivotal innovation, offering advanced capabilities such as predictive analytics, real-time optimization, and adaptive learning. This paper presents a comprehensive study on the development, implementation, and impact of AI-DSS across various business functions. It explores the integration of machine learning (ML), deep learning (DL), natural language processing (NLP), and explainable AI (XAI) in decision support environments, emphasizing how these technologies enable data-driven and agile decision-making. Through a detailed literature review, the paper examines key domains—Supply Chain Management (SCM), Predictive Maintenance (PdM), and Financial Operations—where AI-DSS are reshaping traditional processes. A comparative metrics framework is applied to assess improvements in accuracy, time efficiency, sustainability, explainability, and complexity. Empirical findings reveal that AI-DSS significantly outperform traditional systems, offering up to 20% higher decision accuracy and reducing processing times by as much as 30%. However, challenges such as algorithm aversion, data silos, lack of transparency, and ethical concerns remain critical barriers to adoption. The paper concludes by recommending hybrid human-AI decision frameworks, domain-specific explainability tools, and standardized evaluation benchmarks as pathways to wider adoption. It also identifies future research opportunities in integrating generative AI, digital twins, and process-aware decision support models. This study contributes a structured, empirical, and comparative understanding of AI-DSS and their transformative potential for modern business operations.
- Book Chapter
- 10.1007/978-3-030-32236-6_51
- Jan 1, 2019
Deep learning has made significant contributions to the recent progress in artificial intelligence. In comparison to traditional machine learning methods such as decision trees and support vector machines, deep learning methods have achieved substantial improvements in various prediction tasks. However, deep neural networks (DNNs) are comparatively weak in explaining their inference processes and final results, and they are typically treated as a black box by both developers and users. Some even consider DNNs at the current stage to be alchemy rather than real science. In many real-world applications such as business decision making, process optimization, medical diagnosis, and investment recommendation, the explainability and transparency of our AI systems become particularly essential for their users, for the people who are affected by AI decisions, and, furthermore, for the researchers and developers who create the AI solutions. In recent years, explainability and explainable AI have received increasing attention from both the research community and industry. This paper first introduces the history of Explainable AI, starting from expert systems and traditional machine learning approaches and moving to the latest progress in the context of modern deep learning, and then describes the major research areas and state-of-the-art approaches of recent years. The paper ends with a discussion of the challenges and future directions.
- Conference Article
- 10.1145/3323873.3325058
- Jun 5, 2019
AI as a concept has been around since the 1950s. With the recent advancements in machine learning technology and the availability of big data and large computing processing power, the scene is set for AI to be used in many more systems and applications which will profoundly impact society. Current deep learning based AI systems are mostly black boxes and are often non-explainable. Though they offer high performance, they are also known to make occasional fatal mistakes. This has limited the applications of AI, especially in mission-critical problems such as decision support, command and control, and other life-critical operations. This talk focuses on explainable AI, which holds promise in helping humans to better understand and interpret the decisions made by black-box AI models. Current research efforts towards explainable multimedia AI center on two parts of the solution. The first part focuses on better understanding of multimedia content, especially video. This includes dense annotation of video content covering not just object recognition but also relation inference. The relations include both correlation and causality relations, as well as common-sense knowledge. Dense annotation enables us to transform the representation of video towards that of language, in the form of relation triplets and relation graphs, and permits in-depth research on flexible descriptions, question answering, and knowledge inference over video content. A large-scale video dataset has been created to support this line of research. The second direction focuses on the development of explainable AI models, which is just beginning. Existing works follow either the intrinsic approach, which designs self-explanatory models, or the post-hoc approach, which constructs a second model to interpret the target model. Both approaches have limitations in the trade-off between interpretability and accuracy, and they lack guarantees about explanation quality. In addition, there are issues of quality, fairness, robustness, and privacy in model interpretation. In this talk, I present current state-of-the-art approaches in explainable multimedia AI, along with our preliminary research on relation inference in videos and on leveraging prior domain knowledge, information-theoretic principles, and adversarial algorithms to achieve interpretability. I will also discuss future research towards the quality, fairness, and robustness of interpretable AI.
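A minimal sketch of the relation-triplet and relation-graph representation described in the talk, with invented video content; it only illustrates the data structure, not the annotation or inference methods.

```python
# Relation triplets (subject, relation, object) from a hypothetical video clip,
# assembled into a small relation graph for simple queries.
from collections import defaultdict

triplets = [
    ("person_1", "holds", "cup"),
    ("person_1", "walks_towards", "door"),
    ("door", "opens_because_of", "person_1"),   # a causality relation
]

graph = defaultdict(list)
for subj, relation, obj in triplets:
    graph[subj].append((relation, obj))

# Toy question answering: "what does person_1 interact with?"
print([obj for rel, obj in graph["person_1"]])
```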
- Book Chapter
- 10.1007/978-3-032-08333-3_9
- Oct 19, 2025
The impact of explainability on users’ trust in AI has long been debated, with research often hinting that explanations of AI decisions may enhance skepticism. However, our study reveals a paradox: when faced with direct and tangible harm, non-experts continue to trust AI explanations unquestioningly. As evolving EU legislation mandates greater transparency in AI decision-making, it is critical to understand whether explainability truly enables users to detect and challenge flawed decisions. This study examines trust in explainable AI (XAI) through an experiment with 63 non-expert participants who (wrongfully) believed that an AI system was grading their exams. SHAP-like explanations accompanied the decisions, while the experimental group systematically received lower grades to simulate direct harm from simulated AI bias. Unlike prior studies relying on simulated systems, we employed a real-world high-risk use case, academic grading, where AI decisions have concrete consequences. Contrary to expectations, users’ trust levels in AI explanations remained unchanged despite clear evidence of bias, highlighting an unsettling shift from skepticism toward blind trust in XAI. These findings challenge the assumption that explainability fosters critical AI literacy and reveal a growing risk: AI explanations may reinforce misplaced trust instead of increasing caution. This underscores the urgent need to reassess how explainability is designed and whether it empowers users to engage critically with AI decisions.
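For readers unfamiliar with the format shown to participants, the toy below produces a SHAP-like additive explanation for a linear grading model, where each feature's contribution is its weight times its deviation from the feature mean (the exact Shapley value in the linear, independent-feature case). The features, weights, and baseline are invented and are not the study's grading system.

```python
# Toy additive (SHAP-like) explanation of one predicted grade.
feature_means = {"correct_answers": 12.0, "word_count": 450.0, "citations": 2.0}
weights       = {"correct_answers": 0.4,  "word_count": 0.002, "citations": 0.5}
baseline = 3.0   # expected grade when every feature sits at its mean

student = {"correct_answers": 15.0, "word_count": 380.0, "citations": 1.0}

# Each feature's contribution pushes the grade up or down from the baseline.
contributions = {f: weights[f] * (student[f] - feature_means[f]) for f in weights}
grade = baseline + sum(contributions.values())

print(f"predicted grade: {grade:.2f}")
for feature, phi in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature:>16s}: {phi:+.2f}")
```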
- Research Article
- 10.53759/5181/jebi202303022
- Oct 5, 2023
- Journal of Enterprise and Business Intelligence
The act of decision-making lies at the core of human existence and shapes our interactions with the surrounding environment. This article investigates the utilization of artificial intelligence (AI) techniques in the advancement of intelligent decision support systems (IDSS). It builds upon prior research conducted in the decision-making field and the subsequent development of decision support systems (DSS) based on that knowledge. The initial establishment of the fundamental principles of classical DSS is undertaken. The subsequent emphasis is directed towards the integration of artificial intelligence techniques within IDSS. The evaluation of an IDSS, as well as any other DSS, is a crucial undertaking in order to gain insights into the system's capabilities and identify areas that require enhancement. This article presents a review conducted on this significant yet insufficiently investigated subject matter. When utilized in conjunction with DSS, AI techniques such as intelligent agents, artificial neural networks (ANN), evolutionary computing, case-based reasoning, and fuzzy logic provide valuable assistance in defining complex practical challenges, which are mostly time-critical, encompass extensive and scattered data, and can derive advantages from sophisticated reasoning.
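As a small illustration of one of the AI techniques listed above (case-based reasoning), the sketch below retrieves the stored decision cases nearest to a current situation so their outcomes can be reused; the case base and attributes are invented for illustration.

```python
# Nearest-neighbour case retrieval, the "retrieve" step of case-based reasoning.
import math

cases = [
    {"features": (0.8, 0.2, 0.5), "decision": "expedite order"},
    {"features": (0.1, 0.9, 0.4), "decision": "renegotiate supplier"},
    {"features": (0.5, 0.5, 0.9), "decision": "increase safety stock"},
]

def retrieve(query, k=1):
    """Return the k stored cases closest (Euclidean distance) to the query."""
    return sorted(cases, key=lambda case: math.dist(case["features"], query))[:k]

current_situation = (0.7, 0.3, 0.6)
for case in retrieve(current_situation, k=2):
    print(case["decision"])
```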
- Research Article
- 10.1002/cpe.5491
- Sep 1, 2019
- Concurrency and Computation: Practice and Experience
In the fourth industrial revolution, various intermediating elements such as theories, techniques, and implementations are used. These include computational intelligence; applied soft computing, fuzzy logic, and artificial neural networks; intelligent content security; model-driven architecture and meta-modeling; multimedia content processing and retrieval; vehicular networks; big data; intelligent information processing; convergence/complex contents; smart learning; intelligent content design management; methodology and design theory; intelligent media content and convergence/complex media; social media and collective intelligence; and social media big data analytics. To address these interesting and significant issues related to the fast-developing area of smart media and applications, the editors have prepared this special issue, Smart Media and Application, and have selected eleven papers. In Motion data acquisition method for motion analysis in golf by Hwang et al.,1 the authors describe a system that reproduces the subject's actual motion by acquiring motion information from 15 inertial sensors and using the subject's actual joint lengths and initial orientation extracted from a depth camera. The results show that the measurement error of the joint-length and foot-stance information extracted through the depth camera ranges from a minimum of 4.4% to a maximum of 6.94%. In NSCT domain–based secure multiple watermarking technique through lightweight encryption for medical images by Thakur et al.,2 the proposed scheme works in the non-subsampled contourlet transform (NSCT) domain, first partitioning the host image into sub-components and then calculating their entropy values. In Environmental monitoring system for intelligent stations by Li et al.,3 the authors propose the design of a high-speed rail station environmental monitoring system based on LoRa communication technology, aiming to enable real-time monitoring of temperature, humidity, illuminance, and noise decibels in high-speed railway stations. Efficient dummy generation for considering obstacles and protecting user location by Song et al.4 studies efficient dummy-generation techniques to improve user privacy protection and shows improvements over other recent techniques. In Key node selection based on a genetic algorithm for fast patching in social networks by Kim et al.,5 the authors observe that social networks carry considerable amounts of personal information and share this information with friends without space-time limitations; to improve patch propagation speed, it is important to select the key nodes that serve as starting points of the patching process. The authors propose a key node selection scheme based on a genetic algorithm to find the nodes that contribute most to patch propagation, and simulation results show that the proposed scheme propagates patches more rapidly than the existing one. In Effective computer-assisted pronunciation training based on phone-sensitive word recommendation by Jo et al.,6 the authors propose a computer-assisted pronunciation training system targeting unacceptable pronunciation due to confusion among contextual allophones, a problem that often emerges from phoneme-based feedback provided during pronunciation training. The results show that the proposed system improves pronunciation skills through training sessions that use recommended words containing phoneme pairs that were initially pronounced incorrectly. In Dataset retrieval system based on automation of data preparation with dataset description model by Mun et al.,7 the authors propose a dataset description model that can express requirements for data processing, together with a dataset retrieval system based on automated data preparation, which can provide good-quality datasets for statistical learning applications using data preparation methods such as data acquisition, refinement, and organization. In An enhanced 3DCNN-ConvLSTM for spatiotemporal multimedia data analysis by Wang et al.,8 the authors use a 3DCNN for the CNN part and ConvLSTM for the RNN part. In A web-based group decision support system for multicriteria problems by Conceição et al.,9 a multiagent system is used to combine and process decision information, with virtual agents representing each decision-maker; the high level of usability that the system provides will contribute to easier acceptance and adoption of this kind of system. In Intelligent and semantic threshold schemes for security in cloud computing by Ogiela and Snasel,10 security is enhanced with new classes of solutions, including semantic secret-sharing protocols, in which the secret is subjected to a process of splitting and hiding that can take various forms, including that of services. In Estimation of a physical activity energy expenditure with a patch-type sensor module using artificial neural network by Kang et al.,11 the authors propose the most accurate method using a wireless patch-type sensor to predict the energy expenditure of physical activities; by optimizing the prediction with a neural network algorithm, they achieve an RMSE of 0.1893 and an R2 of 0.91 for the energy expenditure of aerobic and anaerobic exercises. We would like to express our sincere appreciation to all authors for their valuable contributions.