Getting to Digital
It has been over 40 years since archives began to automate their processes and digitize content. During that period, archival practices and ideas have experienced unprecedented development, archives have established substantial digital presences, and records creators have shifted almost completely to digital production. Today, archives are looked to by Artificial Intelligence developers and scholars alike as one of the few remaining sources of big data, both for training Large Language Models and for surfacing new knowledge about the past and humanity. Even so, archives have yet to reach “full” digital capacity and fluency, and not simply because of resource limitations. One might ask whether they ever will, or even should. This lecture looks back on the history of digital developments in archives, examining the motivations and challenges arising at different points. It contemplates why and how the goalposts have repeatedly moved, how the questions arising in archival decision-making have changed as a consequence, and how they may continue to change as we look to the future nature, roles, and practices of archives.
- Research Article
- 10.54808/jsci.23.07.29
- Dec 1, 2025
- Journal of Systemics, Cybernetics and Informatics
Artificial intelligence (AI) creators and developers attempt to simulate human thinking and our environment (e.g., the popular web-based Second Life) and, more controversially, claim to seek to replicate the human brain (e.g., the Human Brain Project) and what it does. However, some central questions are: "Who are we? Are we truly who we believe we are? What is thinking/mentation/ideas? Why is our world rife with contradictions and conflict? What is reality itself?" Experts such as David Chalmers refer to "consciousness" as the "hard problem". If we do not fully understand these concepts in humans, how can we possibly recreate them in AI? It is a puzzle, much like the challenge of defining and creating life itself. Answers must be founded on what exists, not merely on our desires. To replicate humans, AI developers must confront the difference between belief and authentic self, since beliefs can mask the true self. A major flaw of the well-known Turing Test, which assesses whether a machine can imitate human intelligence, is that it cannot verify whether someone's beliefs are reflected in their actions. AI developers must be competent technicians but must also integrate philosophy, thus addressing overlapping questions of meaning, ethics, purpose, and ethos. Even AI creators acknowledge that AI could threaten humanity. If future technology integrates self-awareness or subjective experience into advanced computing systems, we will need to revisit some ancient wisdom: "Know thyself." Any viable human identity probe (such as Authentic Systems) must be underpinned by philosophy, revealing the extent to which one has internalized belief in action. Once the material and psychological aspects of authentic identity are known, we apply the unity-of-opposites law to establish authentic human identity.
- Research Article
- 10.1609/aiide.v11i1.12795
- Nov 19, 2015
- Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment
Hand-coded finite-state machines and behavior trees are the go-to techniques for artificial intelligence (AI) developers who want full control over their character's bearing. However, manually crafting behaviors for computer-controlled agents is a tedious and parameter-dependent task. From a high-level view, the process of designing agent AI by hand usually starts with the determination of a suitable set of action sequences. Once the AI developer has identified these sequences, they are merged into a complete behavior by specifying appropriate transitions between them. Automated techniques, such as learning, tree search, and planning, are at the other end of the AI toolset's spectrum. They do not require the manual definition of action sequences and adapt to parameter changes automatically. Yet AI developers are reluctant to incorporate them in games because of their performance footprint and lack of immediate designer control. We propose a method that, given the symbolic definition of a problem domain, can automatically extract a transparent behavior model from Goal-Oriented Action Planning (GOAP). The method first observes the behavior exhibited by GOAP in a Monte-Carlo simulation and then evolves a suitable behavior tree using a genetic algorithm. The generated behavior trees are comprehensible, refinable, and as performant as hand-crafted ones.
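As an illustration of the two-stage pipeline this abstract describes, the sketch below first collects action traces from a stand-in planner across Monte-Carlo runs and then evolves a compact, readable action sequence with a simple genetic algorithm. The toy domain, the stubbed planner, and all identifiers are hypothetical; the paper's actual method evolves full behavior trees from GOAP, which this flat-sequence representation only approximates.

```python
# Stage 1: collect action traces from a planner via Monte-Carlo simulation.
# Stage 2: evolve a readable action sequence that reproduces those traces.
# Toy domain and stubbed planner; not the authors' implementation.
import random

ACTIONS = ["patrol", "spot_enemy", "draw_weapon", "attack", "reload"]

def planner_trace(rng: random.Random) -> list[str]:
    """Stand-in for GOAP: returns a plan for a randomly sampled world state."""
    trace = ["patrol", "spot_enemy", "draw_weapon", "attack"]
    if rng.random() < 0.5:          # sometimes the agent must reload first
        trace.insert(3, "reload")
    return trace

def fitness(candidate: list[str], traces: list[list[str]]) -> float:
    """Average per-position agreement between a candidate and observed traces."""
    score = 0.0
    for t in traces:
        matches = sum(a == b for a, b in zip(candidate, t))
        score += matches / max(len(t), len(candidate))
    return score / len(traces)

def evolve(traces, pop_size=40, length=5, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.choice(ACTIONS) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, traces), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, length)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # point mutation
                child[rng.randrange(length)] = rng.choice(ACTIONS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda c: fitness(c, traces))

rng = random.Random(0)
observed = [planner_trace(rng) for _ in range(200)]   # Monte-Carlo stage
print(evolve(observed))                               # evolved behavior sketch
```

Because the fitness function only rewards agreement with observed planner traces, the evolved artifact stays inspectable and editable by a designer, which is the property the paper emphasizes over black-box learning.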
- Research Article
- 10.1007/s44163-023-00074-4
- Jul 17, 2023
- Discover Artificial Intelligence
Artificial intelligence (AI) has rapidly become one of the technologies used for competitive advantage. However, there are also growing concerns about bias in AI models, as AI developers risk introducing bias both unintentionally and intentionally. This study, using a qualitative approach, investigated how AI developers can contribute to the development of fair AI models. The key findings reveal that the risk of bias stems mainly from the lack of gender and social diversity in AI development teams and from the haste of AI managers to deliver much-anticipated results. The integrity of AI developers is also critical, as they may conceal bias from management and other AI stakeholders. The testing phase before model deployment risks bias because it is rarely representative of the diverse societal groups that may be affected. The study makes practical recommendations in four main areas: governance, social, technical, and training and development processes. Responsible organisations need to take deliberate actions to ensure that their AI developers adhere to fair processes when developing AI; AI developers must prioritise ethical considerations and consider the impact their models may have on society; partnerships should be established between AI developers, AI stakeholders, and the society that might be impacted by AI models; and AI developers need to prioritise transparency and explainability in their models while ensuring adequate testing for bias and corrective measures before deployment. Emotional intelligence training should also be provided to AI developers to help them engage in productive conversations with individuals outside the development team.
- Research Article
- 10.59141/japendi.v6i7.8367
- Jul 16, 2025
- Jurnal Pendidikan Indonesia
Telemedicine, as part of digital health services, is officially regulated in Law No. 17 of 2023 concerning Health and implemented through Government Regulation No. 28 of 2024, but these instruments have not yet addressed the role of artificial intelligence as a prescription-issuing entity. Meanwhile, Law No. 1 of 2024 concerning Electronic Information and Transactions recognizes the validity of electronic documents and digital signatures, opening up legal opportunities for the digitization of medical files. The aim of this study was to evaluate the legal feasibility of automated prescribing by artificial intelligence systems in telemedicine, in particular reviewing the normative void governing the "signer" status of prescriptions issued by artificial intelligence. The research method used is normative legal research, comprising the study of national legislative texts, the interpretation of key articles, and comparisons with international practice from the FDA (US) and the EMA (European Union). The analysis shows that the current national legal regime recognizes only licensed doctors as the parties authorized to sign electronic prescriptions, so artificial intelligence can function only as a "Clinical Decision Support System" without the legal right to issue prescriptions independently. The results also highlight the legal risks for artificial intelligence platform organizers and developers if automatic prescriptions are not verified by medical personnel, including potential malpractice lawsuits and violations of the Consumer Protection Law. In conclusion, to realize the issuance of prescriptions by artificial intelligence, it is necessary to amend the Health Law and/or sectoral regulations to formalize algorithm certification standards, periodic audit mechanisms, and a scheme for dividing legal responsibility among artificial intelligence developers, platform providers, and supervising doctors.
- Front Matter
- 10.1016/s2589-7500(22)00068-1
- Apr 5, 2022
- The Lancet Digital Health
In this issue of The Lancet Digital Health, Xiaoxuan Liu and colleagues give their perspective on global auditing of medical artificial intelligence (AI). They call for the focus to shift from demonstrating the strengths of AI in health care to proactively discovering its weaknesses. Machines make unpredictable mistakes in medicine, which differ significantly from those made by humans. Liu and colleagues state that errors made by AI tools can have far-reaching consequences because of the complex and opaque relationships between the analysis and the clinical output. Given that there is little human control over how an AI generates results and that clinical knowledge is not a prerequisite in AI development, there is a risk of an AI learning spurious correlations that seem valid during training but are unreliable when applied to real-world situations. Lauren Oakden-Rayner and colleagues analysed the performance of an AI across a range of relevant features for hip fracture detection. This preclinical algorithmic audit identified barriers to clinical use, including a decrease in sensitivity at the prespecified operating point. The study highlighted several “failure modes”: the propensity of an AI to fail recurrently under certain conditions. Oakden-Rayner told The Lancet Digital Health that their study showed that “the failure modes of AI systems can look bizarre from a human perspective. Take, for example, in the hip fracture audit (figure 5), the recognition that the AI missed an extremely displaced fracture … the sort of image even a lay person would recognise as completely abnormal.” These errors can drastically affect clinician and patient trust in AI. Another example demonstrating the need for auditing was highlighted last month in an investigation by STAT and the Massachusetts Institute of Technology, which found that an Epic health algorithm used to predict sepsis risk in the USA deteriorated sharply in performance, from 0.73 AUC to 0.53 AUC, over 10 years. This deterioration over time was caused by changes in the hospital coding system, increased diversity and volume of patient data, and changes in the operational behaviours of caregivers. There was little to no oversight of the AI tool once it reached the market, potentially causing harm to patients in hospital. Liu commented, “without the ability to observe and learn from algorithmic errors, the risk is that it will continue to happen and there's no accountability for any harm that results.” Auditing medical AI is essential, but whose responsibility is it to ensure that AI is safe to use? Some experts think that AI developers are responsible for providing guidance on managing their tools, including how and when to check the system's performance, and for identifying vulnerabilities that might emerge after the tools are put into practice. Others argue that not all the responsibility lies with AI developers and that health providers must test AI models on other data to verify their utility and assess potential vulnerabilities. Liu says, “we need clinical teams to start playing an active role in algorithmic safety oversight. They are best placed to define what success and failure looks like for their health institution and their patient cohort.” There are three challenges to overcome to ensure AI auditing is successfully implemented. First, in practice, auditing will require professionals with clinical and technical expertise to investigate and prevent AI errors and to thoughtfully interrogate errors before and during real-world deployment.
However, experts with computational and clinical skill sets are not yet commonplace. Health-care institutions, AI companies, and governments must invest in upskilling health-care workers so that these experts can become an integral part of the medical AI development process. Second, industry-wide standards for monitoring medical AI tools over time must be enforced by key regulatory bodies. Tools to identify when an algorithm becomes miscalibrated because of changes in data or environment are being developed by researchers, but these tools must be endorsed in a sustained and standardised way, led by regulators, health systems, and AI developers. Third, the main issue that can exacerbate errors in AI is the lack of transparency of the data, code, and parameters due to intellectual property concerns. Liu and colleagues emphasise that much of the benefit that software and data access would provide can instead be obtained through a web portal with the ability to test the model on new data and receive model outputs. Oakden-Rayner said, “AI developers have a responsibility to make auditing easier for clinicians, especially by providing clear details of how their system works and how it was built.”
Linked articles:
- The medical algorithmic audit: Artificial intelligence systems for health care, like any other medical device, have the potential to fail. However, specific qualities of artificial intelligence systems, such as the tendency to learn spurious correlates in training data, poor generalisability to new deployment settings, and a paucity of reliable explainability mechanisms, mean they can yield unpredictable errors that might be entirely missed without proactive investigation. We propose a medical algorithmic audit framework that guides the auditor through a process of considering potential algorithmic errors in the context of a clinical task, mapping the components that might contribute to the occurrence of errors, and anticipating their potential consequences.
- Validation and algorithmic audit of a deep learning system for the detection of proximal femoral fractures in patients in the emergency department: a diagnostic accuracy study: The model outperformed the radiologists tested and maintained performance on external validation, but showed several unexpected limitations during further testing. Thorough preclinical evaluation of artificial intelligence models, including algorithmic auditing, can reveal unexpected and potentially harmful behaviour even in high-performance artificial intelligence systems, which can inform future clinical testing and deployment decisions.
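To make the editorial's second challenge concrete, the sketch below tracks a model's AUC across successive deployment windows and flags sustained degradation, in the spirit of the sepsis model's reported slide from 0.73 to 0.53. The synthetic data, window size, and tolerance threshold are illustrative assumptions, not a regulator-endorsed procedure.

```python
# Minimal performance-drift monitor: compute AUC per deployment window and
# alert when it falls below a pre-deployment baseline minus a tolerance.
# All data here is synthetic; real monitoring would use logged predictions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

def synthetic_window(drift: float, n: int = 500):
    """One window of labels and scores; the signal weakens as `drift` grows."""
    y = rng.integers(0, 2, size=n)
    scores = (1.0 - drift) * y + rng.normal(0.0, 0.8, size=n)
    return y, scores

def audit_auc(windows, baseline_auc: float, tolerance: float = 0.05):
    """Return (window index, AUC) pairs that fall below baseline - tolerance."""
    alerts = []
    for i, (y_true, y_score) in enumerate(windows):
        auc = roc_auc_score(y_true, y_score)
        if auc < baseline_auc - tolerance:
            alerts.append((i, round(auc, 3)))
    return alerts

baseline = roc_auc_score(*synthetic_window(drift=0.0))  # pre-deployment AUC
windows = [synthetic_window(drift=q / 12) for q in range(10)]  # ten windows
print("windows needing investigation:", audit_auc(windows, baseline))
```

A check this simple already surfaces the kind of silent decay the STAT investigation described; the hard parts the editorial points to are institutional, deciding who runs it, on whose data, and with what accountability.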
- Research Article
- 10.1016/j.dim.2024.100082
- Jun 1, 2025
- Data and Information Management
Trustworthy AI: AI developers’ lens to implementation challenges and opportunities
- Research Article
- 10.1007/s12599-024-00914-2
- Jan 5, 2025
- Business & Information Systems Engineering
The increasing proliferation of artificial intelligence (AI) systems presents new challenges for the future of information systems (IS) development, especially in terms of holding stakeholders accountable for the development and impacts of AI systems. However, current governance tools and methods in IS development, such as AI principles or audits, are often criticized for their ineffectiveness in influencing AI developers’ attitudes and perceptions. Drawing on construal level theory and Toulmin’s model of argumentation, this paper employed a sequential mixed-method approach to integrate insights from a randomized online experiment (Study 1) and qualitative interviews (Study 2). This combined approach helped us investigate how different types of accountability arguments affect AI developers’ accountability perceptions. In the online experiment, process accountability arguments were found to be more effective than outcome accountability arguments in enhancing AI developers’ perceived accountability. However, when supported by evidence, both types of accountability arguments proved similarly effective. The qualitative study corroborates and complements the quantitative study’s conclusions, revealing that process and outcome accountability emerge as distinct theoretical constructs in AI systems development. The interviews also highlight critical organizational and individual boundary conditions that shape how AI developers perceive their accountability. Together, the results contribute to IS research on algorithmic accountability and IS development by revealing the distinct nature of process and outcome accountability while demonstrating the effectiveness of tailored arguments as governance tools and methods in AI systems development.
- Research Article
- 10.5465/ambpp.2022.11169abstract
- Aug 1, 2022
- Academy of Management Proceedings
Artificial Intelligence (AI) may change work, management, and societies in the future. This study asks how AI developers consider the potential consequences of their work. It proposes an imagined-futures perspective to understand how AI developers imagine the futures associated with AI and how these imaginations shape their work and practices. It qualitatively examines the case of several AI developers and their work and finds that they consider the future consequences of the AI they participate in developing as either tangential (loosely connected to what they do) or integral (closely associated with what they do) to their work. These imaginations of the future are in tension, prompting some AI developers to work at connecting them as they adjust how they view the future and their work. This study adds to scholarship by revealing how AI development relies upon particular imaginations of the future, by illuminating how practitioners engage speculatively with the future, and by explaining the importance for Information Technology (IT) development of developers’ answers to what their work may do in the future.
- Research Article
- 10.2196/41089
- Jun 22, 2023
- Journal of Medical Internet Research
Resources are increasingly spent on artificial intelligence (AI) solutions for medical applications aiming to improve the diagnosis, treatment, and prevention of diseases. While the need for transparency and the reduction of bias in data and algorithm development have been addressed in past studies, little is known about the knowledge and perceptions of bias among AI developers. This study's objective was to survey AI specialists in health care to investigate developers' perceptions of bias in AI algorithms for health care applications and their awareness and use of preventative measures. A web-based survey was provided in both German and English, comprising a maximum of 41 questions using branching logic within the REDCap web application. Only results from participants with experience in the field of medical AI applications who completed the questionnaire were included in the analysis. Demographic data, technical expertise, and perceptions of fairness, as well as knowledge of biases in AI, were analyzed, and variations by gender, age, and work environment were assessed. A total of 151 AI specialists completed the web-based survey. The median age was 30 (IQR 26-39) years, and 67% (101/151) of respondents were male. About one-third rated their AI development projects as fair (47/151, 31%) and a further third as moderately fair (51/151, 34%); 12% (18/151) reported their AI to be barely fair, and 1% (2/151) not fair at all. The one participant identifying as diverse rated AI developments as barely fair, and the 2 participants of undefined gender rated AI developments as barely fair and moderately fair, respectively. Reasons for biases selected by respondents were a lack of fair data (90/132, 68%), of guidelines or recommendations (65/132, 49%), or of knowledge (60/132, 45%). Half of the respondents worked with image data (83/151, 55%) from 1 center only (76/151, 50%), and 35% (53/151) worked with national data exclusively. This study shows that AI specialists, overall, perceive their AI developments as moderately fair. Gender minorities did not once rate their AI development as fair or very fair. Therefore, further studies need to focus on minorities and women and their perceptions of AI. The results highlight the need to strengthen knowledge about bias in AI and to provide guidelines on preventing bias in AI health care applications.
- Research Article
- 10.18421/tem143-55
- Aug 27, 2025
- TEM Journal
The ethical use of Artificial Intelligence (AI) has become a critical concern as AI systems influence various sectors of society. While AI technologies such as generative AI continue to advance, more frameworks of AI ethics principles should be investigated to guide both the users and the developers of AI systems. This research highlights the importance of understanding AI developers' awareness of ethical principles when implementing AI technologies. The paper focuses on AI developers in the academic community, who have the role of preparing new generations of AI experts and equipping them with concrete knowledge before they move into industry. Based on empirical data from 30 AI developers in academia, the findings showed a reasonable understanding of AI ethics in general. However, principles such as privacy by design, security by design, and the ability to appeal require further study and a clear guiding framework. Additionally, the challenges identified were categorized into five areas: (1) resource and regulatory challenges, (2) accountability and responsibility issues, (3) technical complexity, (4) ethical and human-value conflicts, and (5) cultural and institutional barriers. Suggestions for future research directions were also provided to help academia support AI developers in incorporating AI ethics.
- Research Article
- 10.1007/s43681-024-00535-1
- Sep 2, 2024
- AI and Ethics
The prevalence of artificial intelligence (AI) tools has inspired social studies researchers, ethicists, and policymakers to seriously examine AI’s sociopolitical and ethical impacts. The AI ethics literature provides guidance on which ethical principles to implement via AI governance; the AI auditing literature, especially ethics-based auditing (EBA), suggests methods to verify whether such principles are respected in AI model development and deployment. As abundant as EBA methods are, I argue that most currently take a top-down and post-hoc approach to AI model development: existing EBA methods mostly assume a preset of high-level, abstract principles that can be applied universally across contexts, while current EBA is conducted only after the development or deployment of AI models. Taken together, these methods do not sufficiently capture the developmental practices surrounding the constitution of AI models on a day-to-day basis. What goes on in an AI development space, and the very developers whose hands write code, assemble datasets, and design model architectures, remain unobserved and therefore uncontested. I attempt to address this lack of documentation on AI developers’ day-to-day practices by conducting an ethnographic “AI lab study” (a term coined by Florian Jaton), demonstrating just how much context and empirical data can be excavated to support a whole-picture evaluation of AI models’ sociopolitical and ethical impacts. I then propose a new method to be added to the arsenal of EBA: ethnographic audit trails (EATs), which take a bottom-up, in-progress approach to AI model development, capturing previously unobservable developer practices.
- Research Article
- 10.1007/s43681-021-00120-w
- Dec 8, 2021
- AI and Ethics
While the demand for ethical artificial intelligence (AI) systems increases, unethical uses of AI continue to accelerate, even though there is no shortage of ethical guidelines. We argue that a possible underlying cause is that AI developers face a social dilemma in AI development ethics, preventing the widespread adoption of ethical best practices. We define the social dilemma for AI development and describe why the current crisis in AI development ethics cannot be solved without relieving AI developers of their social dilemma. We argue that AI development must be professionalised to overcome the social dilemma, and we discuss how medicine can be used as a template in this process.
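To see why guidelines alone fail in this argument, consider a toy payoff model of the dilemma the abstract describes: cutting ethical corners is each developer's dominant strategy even though mutual adherence would leave everyone better off. The payoff numbers below are illustrative assumptions, not taken from the paper.

```python
# A toy prisoner's-dilemma payoff for two competing AI developers.
# Payoffs are hypothetical and chosen only to exhibit the dilemma structure.
PAYOFF = {  # (my move, rival's move) -> my payoff
    ("ethical", "ethical"): 3,   # shared trust, sustainable products
    ("ethical", "cut"):     0,   # I ship late, rival wins the market
    ("cut",     "ethical"): 5,   # I ship first, externalise the harms
    ("cut",     "cut"):     1,   # race to the bottom
}

def best_response(rival_move: str) -> str:
    """Pick the move that maximises my payoff against a fixed rival move."""
    return max(("ethical", "cut"), key=lambda me: PAYOFF[(me, rival_move)])

# "cut" strictly dominates, so guidelines without enforcement cannot hold:
assert best_response("ethical") == "cut" and best_response("cut") == "cut"
# ...even though mutual ethics beats mutual defection:
assert PAYOFF[("ethical", "ethical")] > PAYOFF[("cut", "cut")]
print("dominant strategy:", best_response("ethical"))
```

Professionalisation, on the paper's account, works by changing this payoff structure, as medical licensing does: defection stops being individually rational once it carries professional consequences.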
- Research Article
- 10.3389/frai.2025.1525937
- May 27, 2025
- Frontiers in Artificial Intelligence
Background: The integration of Artificial Intelligence (AI) in nephrology has raised concerns regarding bias, fairness, and ethical decision-making, particularly in the context of Diversity, Equity, and Inclusion (DEI). AI-driven models, including Large Language Models (LLMs) like ChatGPT, may unintentionally reinforce existing disparities in patient care and workforce recruitment. This study investigates how AI models (ChatGPT 3.5 and 4.0) handle DEI-related ethical considerations in nephrology, highlighting the need for improved regulatory oversight to ensure equitable AI deployment. Methods: The study was conducted in March 2024 using ChatGPT 3.5 and 4.0. Eighty simulated cases were developed to assess ChatGPT’s decision-making across diverse nephrology topics. ChatGPT was instructed to respond to questions considering factors such as age, sex, gender identity, race, ethnicity, religion, cultural beliefs, socioeconomic status, education level, family structure, employment, insurance, geographic location, disability, mental health, language proficiency, and technology access. Results: ChatGPT 3.5 provided a response to all scenario questions and did not refuse to make decisions under any circumstances. This contradicts the essential DEI principle of avoiding decisions based on potentially discriminatory criteria. In contrast, ChatGPT 4.0 declined to make decisions based on potentially discriminatory criteria in 13 (16.3%) scenarios during the first round and in 5 (6.3%) during the second round. Conclusion: While ChatGPT 4.0 shows improvement in ethical AI decision-making, its limited recognition of bias and DEI considerations underscores the need for robust AI regulatory frameworks in nephrology. AI governance must incorporate structured DEI guidelines, ongoing bias detection mechanisms, and ethical oversight to prevent AI-driven disparities in clinical practice and workforce recruitment. This study emphasizes the importance of transparency, fairness, and inclusivity in AI development, calling for collaborative efforts between AI developers, nephrologists, policymakers, and patient communities to ensure AI serves as an equitable tool in nephrology.
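The evaluation protocol is straightforward to reproduce in outline: present each simulated case to the model over two rounds and tally how often it declines to decide on potentially discriminatory criteria. The sketch below uses a stubbed model call and a keyword heuristic for refusals; both are assumptions for illustration, as the study itself queried ChatGPT 3.5 and 4.0 directly.

```python
# Tally refusal rates over 80 simulated cases and two rounds.
# REFUSAL_MARKERS and ask_model are hypothetical stand-ins, not study code.
REFUSAL_MARKERS = ("cannot make this decision", "would be discriminatory",
                   "decline to choose")

def is_refusal(response: str) -> bool:
    """Heuristic: treat a response as a refusal if it contains a marker."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def refusal_rate(responses: list[str]) -> float:
    """Percentage of responses classified as refusals."""
    return 100 * sum(map(is_refusal, responses)) / len(responses)

def ask_model(case_id: int, round_no: int) -> str:
    """Stub standing in for a model call; a toy pattern, not real data."""
    if round_no == 1 and case_id % 6 == 0:
        return "I cannot make this decision based on these criteria."
    return f"Recommend candidate A for case {case_id}."

for round_no in (1, 2):
    answers = [ask_model(i, round_no) for i in range(80)]
    print(f"round {round_no}: {refusal_rate(answers):.1f}% refusals")
```

Counting refusals rather than grading answer content is what lets the study compare model versions on a single, auditable metric.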
- Research Article
- 10.58600/eurjther1719
- Jul 22, 2023
- European Journal of Therapeutics
A few weeks ago, we published an editorial discussion on whether artificial intelligence applications should be authors of academic articles [1]. We were delighted to receive more than one interesting reply letter to this editorial in a short time [2, 3]. We hope that opinions on this
- Research Article
- 10.1002/jid.4007
- Jun 25, 2025
- Journal of International Development
The advancement of artificial intelligence (AI) poses significant challenges regarding copyright risk and governance. This research seeks to examine the copyright risks and governance challenges associated with AI development in China. This research employed a convergent parallel mixed-methods design, integrating structured surveys (n = 461) and semistructured interviews (n = 78) targeting AI developers, legal experts, regulators and copyright holders in China. China was selected as the focus of this study due to its rapid advancements in AI development, evolving regulatory landscape and growing global influence in shaping digital governance norms. Quantitative data were analysed using the Statistical Package for the Social Sciences (SPSS) software for descriptive and inferential statistics, while qualitative data underwent thematic analysis using NVivo software. The findings showed that AI-powered copyright detection systems play an important role in ensuring fair compensation and protecting intellectual property rights. This study of copyright risk and governance challenges inherent in AI creation contributes novel insight by analysing collaborative governance (CG) in AI accessibility, AI copyright detection to ensure fair compensation for content creators, integrating copyright in responsible AI development and blockchain-based copyright management. The study recommends that clear copyright regulations and CG mechanisms address the issues of AI creation and emphasize proactive measures to reduce risks, promote innovation and uphold ethical and responsible development in the evolving landscape of AI.