AI ethics for the everyday intensivist.
- Artificial Intelligence
- Australian Intensive Care Units
- Implementation Of Artificial Intelligence
- Artificial Intelligence's Benefits
- Artificial Intelligence Ethics
- Artificial Intelligence Decision-making
- Unintended Harms
- Critical Care Settings
- Disadvantaged Populations
- Improve Patient Outcomes
- Research Article
207
- 10.1016/s2589-7500(21)00132-1
- Aug 23, 2021
- The Lancet Digital Health
Artificial intelligence (AI) promises to change health care, with some studies showing proof of concept of provider-level performance in various medical specialties. However, there are many barriers to implementing AI, including patient acceptance and understanding of AI. Patients' attitudes toward AI are not well understood. We systematically reviewed the literature on patient and general public attitudes toward clinical AI (either hypothetical or realised), including quantitative, qualitative, and mixed methods original research articles. We searched biomedical and computational databases from Jan 1, 2000, to Sept 28, 2020, and screened 2590 articles, 23 of which met our inclusion criteria. Studies were heterogeneous regarding the study population, study design, and the field and type of AI under study. Six (26%) studies assessed currently available or soon-to-be available AI tools, whereas 17 (74%) assessed hypothetical or broadly defined AI. The quality of the methods of these studies was mixed, with selection bias a frequent issue. Overall, patients and the general public conveyed positive attitudes toward AI but had many reservations and preferred human supervision. We summarise our findings in six themes: AI concept, AI acceptability, AI relationship with humans, AI development and implementation, AI strengths and benefits, and AI weaknesses and risks. We suggest guidance for future studies, with the goal of supporting the safe, equitable, and patient-centred implementation of clinical AI.
- Research Article
9
- 10.17705/1pais.14602
- Jan 1, 2022
- Pacific Asia Journal of the Association for Information Systems
Background: With growth in Artificial Intelligence (AI) adoption, challenges and hurdles are also becoming evident. Organizations implementing AI are challenged to find ways to leverage AI to produce optimum results and benefits for the organization. Understanding other organizations’ AI implementation journeys will help them start and implement AI. By understanding the different facets of AI implementation, they can strategize AI to gain business value. Though several studies have examined AI adoption, there are few studies on how firms implement it. We close this gap by studying AI adoption and implementations in various firms. Method: Using a qualitative approach of semi-structured interviews, we studied twenty global organizations of various sizes that have implemented AI. Results: The study categorizes the results into four major themes – facilitators, barriers, trends, and strategies for implementing AI. Our study reinforces the relevance of the TOE framework and Rogers’ DOI theory in studying AI adoption. Organizational factors such as top management support, strategic roadmap, availability of skilled resources, and corporate culture influenced AI adoption. A lack of data, or poor data quality, is a primary challenge. Privacy laws concerning data, as well as regulatory bottlenecks, further exacerbate this problem. We also identified and mapped the standard AI implementations to their AI technologies. We found that most firms exploit AI’s image and natural language processing capabilities to automate their processes. Regarding implementation, firms work with partners to obtain customer data and use federated learning. Conclusion: Understanding firms’ AI implementation journeys will help us promote further adoption and experimentation. Organizations can identify areas where they can leverage AI to enhance value, prepare themselves for the future, start and proceed with AI implementation efforts, and overcome barriers they might encounter.
- Research Article
19
- 10.1259/bjro.20230033
- Jun 30, 2023
- BJR|Open
Artificial intelligence (AI) has transitioned from the lab to the bedside, and it is increasingly being used in healthcare. Radiology and Radiography are on the frontline of AI implementation, because of the use of big data for medical imaging and diagnosis for different patient groups. Safe and effective AI implementation requires that responsible and ethical practices are upheld by all key stakeholders, that there is harmonious collaboration between different professional groups, and that customised educational provisions are available for all involved. This paper outlines key principles of ethical and responsible AI, highlights recent educational initiatives for clinical practitioners and discusses the synergies between all medical imaging professionals as they prepare for the digital future in Europe. Responsible and ethical AI is vital to enhance a culture of safety and trust for healthcare professionals and patients alike. Education and training provisions on AI for medical imaging professionals are central to the understanding of basic AI principles and applications, and there are currently many offerings in Europe. Education can facilitate the transparency of AI tools, but more formalised, university-led training is needed to ensure academic scrutiny, appropriate pedagogy, multidisciplinarity and customisation to the learners' unique needs. As radiographers and radiologists work together and with other professionals to understand and harness the benefits of AI in medical imaging, it becomes clear that they are faced with the same challenges and that they have the same needs. The digital future belongs to multidisciplinary teams that work seamlessly together, learn together, manage risk collectively and collaborate for the benefit of the patients they serve.
- Book Chapter
- 10.4018/979-8-3693-1495-1.ch007
- Oct 25, 2024
The rapid advancement of artificial intelligence (AI) presents significant opportunities and challenges for entrepreneurship. This chapter explores the intersection of ethical practices and AI innovation, focusing on how to balance technological progress with responsible AI development. Key topics include defining ethical AI principles, addressing ethical challenges in AI implementation, and building responsible AI practices. The role of various stakeholders, such as developers, organizations, and regulators, is examined to highlight their contributions to ensuring ethical AI use. Additionally, future directions for AI ethics, including advancements in explainable AI and evolving regulatory frameworks, are discussed. By integrating ethical considerations into AI development, entrepreneurs can drive innovation while upholding principles of fairness, transparency, and accountability.
- Research Article
- 10.1016/j.jcrc.2025.155262
- Feb 1, 2026
- Journal of critical care
AI in critical care: A roadmap to the future.
- Research Article
7
- 10.60154/jaepp.2024.v25n1p67
- Mar 15, 2024
- Journal of Accounting, Ethics & Public Policy
The rapid advancement of artificial intelligence (AI) has revolutionized the accounting profession, automating tasks, identifying patterns, and improving accuracy. However, the increasing reliance on AI raises ethical concerns regarding privacy, bias, transparency, and accountability. This research paper delves into the ethical considerations of AI implementation in accounting practices. The paper begins by examining the potential benefits of AI in accounting, highlighting its ability to streamline operations, enhance efficiency, and reduce errors. However, it also acknowledges the ethical risks associated with AI, including data privacy breaches, biased decision-making, lack of transparency, and accountability issues. The paper proposes a framework for responsible AI implementation in accounting to address these ethical concerns. The framework emphasizes establishing clear ethical guidelines, ensuring data privacy and security, mitigating AI algorithms' bias, promoting AI decision-making transparency, and establishing accountability mechanisms. The paper further explores the role of accountants in addressing AI ethics. Accountants are responsible for upholding ethical standards and ensuring that AI systems are used responsibly and ethically. They must be aware of the ethical implications of AI and have the knowledge and skills to mitigate ethical risks. In conclusion, the paper emphasizes the need for a proactive approach to AI ethics in accounting. By establishing clear ethical guidelines, promoting responsible AI implementation, and empowering accountants with ethical knowledge and skills, the accounting profession can harness the potential of AI while upholding ethical principles and safeguarding public trust.
- Research Article
6
- 10.1136/bmjhci-2024-101052
- Apr 1, 2024
- BMJ Health & Care Informatics
ObjectivesTo explore the views of intensive care professionals in high-income countries (HICs) and lower-to-middle-income countries (LMICs) regarding the use and implementation of artificial intelligence (AI) technologies in intensive care units...
- Research Article
- 10.46729/ijstm.v6i3.1217
- Jun 1, 2025
- International Journal of Science, Technology & Management
In the last five years, technology adoption in Indonesia has begun to draw on advances in artificial intelligence (AI) ethics to improve healthcare services. This change has had a significant impact on several institutions, especially the hospital industry. This paper provides an overview of hospital institutions in Indonesia that are implementing AI ethics. A comprehensive review of 54 papers from the Scopus, PubMed, and Google Scholar databases was used to develop our methodology. The existing literature, which includes studies from various disciplines such as education, healthcare, information and communication technology (ICT), licensing, law, hospitality, and economic services, demonstrates the widespread implementation of AI in these fields. We found potential benefits of AI implementation in Indonesian hospitals, focused on improving patient outcomes and equalizing healthcare services. These benefits can be realized by identifying strategies that maximize value while minimizing emerging ethical risks. This review concludes that AI implementation in Indonesian hospitals comes with significant opportunities to improve patient outcomes and the equality of healthcare services. We provide a new view for organizing governance research that identifies gaps in the existing literature, especially in healthcare, and suggests future directions for research utilizing technology in AI ethics.
- Research Article
- 10.70382/hujcer.v7i8.012
- Mar 16, 2025
- Journal of Contemporary Education Research
This study examines the impact of Artificial Intelligence (AI) tools on student initiative and academic laziness among education undergraduate students at the University of Ilorin. A descriptive survey design employing a mixed-methods approach was used. The study population consisted of undergraduate students in the Faculty of Education, with a target population of education students. A total of 377 participants were selected through simple random and convenience sampling techniques. Data collection involved a structured questionnaire and qualitative interviews. The instrument's validity was ensured through expert review, and reliability was confirmed with a Cronbach's alpha coefficient of 0.81. Results indicated that 49.1% of students predominantly use AI tools, while 64.5% demonstrated high reliance, highlighting AI’s integration into academic routines. Despite AI's benefits in facilitating learning and efficiency, concerns regarding its overuse reducing critical thinking and initiative were identified. Gender analysis showed no significant difference in AI reliance (p = 0.32), suggesting universal adoption across demographics. The study concludes that AI tools enhance accessibility and efficiency but may undermine students’ problem-solving abilities if excessively relied upon. Recommendations include integrating critical thinking and ethical AI use into the curriculum, promoting balanced AI use among students, and establishing institutional policies to guide ethical and effective AI applications. The significance of this study lies in its potential to inform educational institutions, policymakers, and technology developers about strategies for responsible AI integration in academic settings, ensuring students benefit from AI tools without compromising intellectual independence.
- Research Article
1
- 10.52783/pst.469
- Jun 8, 2024
- Power System Technology
Employee performance evaluation is a crucial process in human resource management. It measures an individual's contribution to organizational goals. However, traditional evaluation methods face obstacles like subjective bias, inefficiency, and lack of objectivity. Artificial Intelligence (AI) technology offers a promising solution. This paper discusses AI's implementation as an evaluation tool and its impact on human resource development. Previous research shows that AI improves objectivity, fairness, and efficiency in appraisal. It accurately identifies employee potential, aiding targeted development programs. However, research gaps remain, such as AI's use in different industries and ethical concerns affecting employees and organizational culture. This study aims to investigate AI's use in various industry contexts, understand ethical and trust aspects, and analyze its impact on employees and organizational culture. The results will provide valuable insights into AI's benefits in performance evaluation, benefiting human resource development and improving the evaluation process. Organizational understanding of AI's challenges and benefits in human resource development can enhance overall productivity and performance.
- Research Article
311
- 10.4018/jdm.2020040105
- Apr 1, 2020
- Journal of Database Management
Artificial intelligence (AI)-based technology has achieved many great things, such as facial recognition, medical diagnosis, and self-driving cars. AI promises enormous benefits for economic growth, social development, as well as human well-being and safety improvement. However, the low-level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks for users, developers, humanity, and societies. As AI advances, one critical issue is how to address the ethical and moral challenges associated with AI. Even though the concept of “machine ethics” was proposed around 2006, AI ethics is still in the infancy stage. AI ethics is the field related to the study of ethical issues in AI. To address AI ethics, one needs to consider the ethics of AI and how to build ethical AI. Ethics of AI studies the ethical principles, rules, guidelines, policies, and regulations that are related to AI. Ethical AI is an AI that performs and behaves ethically. One must recognize and understand the potential ethical and moral issues that may be caused by AI to formulate the necessary ethical principles, rules, guidelines, policies, and regulations for AI (i.e., Ethics of AI). With the appropriate ethics of AI, one can then build AI that exhibits ethical behavior (i.e., Ethical AI). This paper will discuss AI ethics by looking at the ethics of AI and ethical AI. What are the perceived ethical and moral issues with AI? What are the general and common ethical principles, rules, guidelines, policies, and regulations that can resolve or at least attenuate these ethical and moral issues with AI? What are some of the necessary features and characteristics of an ethical AI? How to adhere to the ethics of AI to build ethical AI?
- Research Article
22
- 10.1177/26334895221112033
- Jan 1, 2022
- Implementation Research and Practice
The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps regarding how to implement and best use AI to add value to mental healthcare services, providers, and consumers. The aim of this paper is to identify challenges and opportunities for AI use in mental healthcare and to describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. The paper is based on a selective review of articles concerning AI in mental healthcare and implementation science. Research in implementation science has established the importance of considering and planning for implementation from the start, the progression of implementation through different stages, and the appreciation of determinants at multiple levels. Determinant frameworks and implementation theories have been developed to understand and explain how different determinants impact on implementation. AI research should explore the relevance of these determinants for AI implementation. Implementation strategies to support AI implementation must address determinants specific to AI implementation in mental health. There might also be a need to develop new theoretical approaches or augment and recontextualize existing ones. Implementation outcomes may have to be adapted to be relevant in an AI implementation context. Knowledge derived from implementation science could provide an important starting point for research on implementation of AI in mental healthcare. This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research. 
However, when taking advantage of the existing knowledge base, it is important to also be explorative and study AI implementation in health and mental healthcare as a new phenomenon in its own right, since implementing AI may differ in various ways from implementing evidence-based practices in terms of which implementation determinants, strategies, and outcomes are most relevant.
Plain Language Summary: The implementation of artificial intelligence (AI) in mental healthcare offers a potential solution to some of the problems associated with the availability, attractiveness, and accessibility of mental healthcare services. However, there are many knowledge gaps concerning how to implement and best use AI to add value to mental healthcare services, providers, and consumers. This paper is based on a selective review of articles concerning AI in mental healthcare and implementation science, with the aim to identify challenges and opportunities for the use of AI in mental healthcare and describe key insights from implementation science of potential relevance to understand and facilitate AI implementation in mental healthcare. AI offers opportunities for identifying the patients most in need of care or the interventions that might be most appropriate for a given population or individual. AI also offers opportunities for supporting a more reliable diagnosis of psychiatric disorders and ongoing monitoring and tailoring during the course of treatment. However, AI implementation challenges exist at organizational/policy, individual, and technical levels, making it relevant to draw on implementation science knowledge for understanding and facilitating implementation of AI in mental healthcare. Knowledge derived from implementation science could provide an important starting point for research on AI implementation in mental healthcare.
This field has generated many insights and provides a broad range of theories, frameworks, and concepts that are likely relevant for this research.
- Research Article
- 10.1108/ijphm-10-2024-0111
- Nov 14, 2025
- International Journal of Pharmaceutical and Healthcare Marketing
Purpose: Artificial intelligence (AI) is transforming diabetes management in India, yet its adoption remains limited beyond metropolitan areas. This study aims to explore AI’s role across different phases of diabetes care, focusing on healthcare access, behavioral change and patient engagement in non-metropolitan regions of Maharashtra and Karnataka.
Design/methodology/approach: A qualitative study was conducted using semi-structured interviews with healthcare professionals and their patients exclusively from 15 hospitals and super-specialty clinics in Maharashtra and Karnataka. Thematic analysis identified key trends in AI adoption, particularly in diagnosis, treatment and patient engagement.
Findings: AI-powered platforms enhance early access to healthcare, risk assessment and patient-clinician interactions. AI-driven insights support personalized treatment, real-time monitoring and predictive healthcare interventions. In addition, AI fosters behavioral change through continuous engagement and lifestyle recommendations. However, challenges such as infrastructure limitations, data security concerns and lack of AI literacy among healthcare providers hinder widespread adoption.
Research limitations/implications: This study’s findings are specific to 15 hospitals in Maharashtra and Karnataka, which may limit broader applicability. The perspectives of frontline healthcare workers and patients require deeper exploration. Expanding research to a larger, more diverse sample and conducting longitudinal studies will strengthen insights into AI’s long-term impact on diabetes care. Addressing AI literacy, data security and infrastructure gaps is essential for widespread adoption. Policymakers must establish robust frameworks ensuring algorithmic transparency and equitable access, reinforcing AI’s effectiveness in healthcare.
Practical implications: Integrating AI in diabetes care enhances early diagnosis, personalized treatment and continuous monitoring, improving patient outcomes. Hospitals must invest in AI literacy programs to equip healthcare professionals with the necessary skills for effective adoption. Policymakers should establish regulatory frameworks to ensure data security, ethical AI use and interoperability with existing healthcare systems. AI developers must focus on user-friendly interfaces to increase patient trust and engagement. Expanding AI adoption in non-metropolitan areas requires infrastructure improvements and public–private partnerships. Strengthening these areas will accelerate AI-driven healthcare transformation, making diabetes management more efficient, accessible and patient-centric.
Social implications: AI-driven diabetes care can bridge healthcare accessibility gaps, particularly in underserved regions, by enabling early diagnosis and remote monitoring. Increased AI adoption fosters health equity, reducing disparities between urban and rural populations. However, digital literacy and trust in AI remain challenges, necessitating awareness campaigns and patient education initiatives. Ethical AI implementation must prioritize data privacy and algorithmic transparency to maintain public confidence. In addition, AI-driven healthcare can empower individuals with personalized health insights, promoting proactive disease management and healthier lifestyles. By fostering collaboration between healthcare providers, policymakers and technology developers, AI can contribute to a more inclusive and patient-centric healthcare system.
Originality/value: This study fills a critical research gap by evaluating AI’s impact on diabetes care beyond metropolitan India. With Maharashtra and Karnataka facing high diabetes prevalence and limited AI adoption, understanding its regional healthcare applications is essential. The study highlights practical strategies to overcome adoption barriers, advocating collaborative efforts between hospitals, policymakers and AI developers to maximize AI’s potential in healthcare.
- Book Chapter
- 10.58532/nbennurch302
- Mar 25, 2024
Artificial Intelligence (AI) is transforming how we live and shaping how the future should look. AI is a form of technology developed to mimic human intelligence through decision making, problem solving, learning and improving abilities to meet the demands posed by the environment. Businesses must be agile enough to respond effectively to markets that are volatile, uncertain, complex and ambiguous. As a result, AI is a critical tool that businesses should adopt to respond to these markets and to remain sustainable. Global agents such as the United Nations must ensure that developing countries are not left behind in the implementation of AI, as this might have negative implications for the markets of these countries. Since AI is an object that should be embraced by humans, there are ethical concerns that must be addressed to ensure that it is effectively implemented. The implementation of AI should be facilitated in a manner that is safe for humans, which will improve its uptake. AI researchers have an obligation to explore and explain to society what AI means and the general ethics surrounding it, and the risks involved must be clearly articulated for society. AI developers and implementers must ensure that society's right to autonomy is not diminished. This chapter presents sustainable AI ethics, which comprises the environmental impacts of AI, carbon emissions and AI modelling, data management, bias in AI, privacy and security, accountability and transparency, international perspectives, and future considerations of AI ethics.
- Book Chapter
- 10.62311/nesx/97991
- Feb 27, 2025
Abstract: As Artificial Intelligence (AI) becomes increasingly integrated into digital ecosystems, ensuring security and trust in AI-driven systems is paramount. This chapter explores the growing challenges posed by deepfakes, misinformation, and algorithmic bias, which threaten public trust, democratic integrity, and ethical AI adoption. Deepfake technology enables the manipulation of media, leading to fraud, identity theft, and political disinformation, while AI-driven misinformation amplifies fake news and biased narratives through social media algorithms. Additionally, algorithmic bias in hiring, law enforcement, and finance raises concerns about discrimination and fairness in AI decision-making. To counter these threats, AI security strategies—including deepfake detection, fact-checking AI models, fairness-aware algorithms, and cybersecurity measures—are being developed to ensure responsible AI governance. This chapter examines real-world applications, case studies from Google, IBM, Facebook, and OpenAI, and the role of regulations, AI ethics, and transparency in mitigating AI-related risks. Looking forward, the future of AI governance requires a collaborative approach between industry, academia, and policymakers to develop trustworthy, fair, and secure AI systems that benefit society while minimizing risks. Keywords: AI security, trust in AI, deepfakes, misinformation, algorithmic bias, AI ethics, fairness in AI, AI governance, AI transparency, adversarial attacks, explainable AI, cybersecurity, AI-driven misinformation, AI regulations, AI fairness, AI-driven trust.