Crystallizing Theory Evaluation: A Human-AI Collaborative Framework for Multi-Perspective Nursing Theory Analysis.

Abstract

The NLN/Jeffries Simulation Theory (JST) occupies an important and debated position in nursing education. It is widely used for its practical value but often criticized for lacking theoretical clarity. Traditional evaluation methods rely on single theoretical perspectives and have not resolved this contradiction, leading to fragmented and incomplete assessments. This study presents a Human-AI collaborative framework for comprehensive meta-theoretical analysis and uses JST as a demonstration case. The framework is based on the concept of crystallization and combines evaluation criteria from Fawcett, Meleis, and Walker and Avant. Artificial intelligence is used to organize and systematically analyze complex and multidimensional theoretical evidence, while the human researcher performs interpretation and synthesis. The findings show that JST's lack of precision is not a weakness but a strength that allows it to function as a boundary object. It serves as a flexible structure that supports collaboration across nursing education, research, and practice. This study provides a practical and transparent model for using Human-Centered Artificial Intelligence in academic research. It shows how technological tools can augment rather than displace the critical, interpretive judgment of nursing scholars.

Similar Papers
  • Research Article
  • Cited by: 36
  • 10.3389/frai.2023.976887
Human-centricity in AI governance: A systemic approach.
  • Feb 14, 2023
  • Frontiers in Artificial Intelligence
  • Anton Sigfrids + 3 more

Human-centricity is considered a central aspect in the development and governance of artificial intelligence (AI). Various strategies and guidelines highlight the concept as a key goal. However, we argue that current uses of Human-Centered AI (HCAI) in policy documents and AI strategies risk downplaying promises of creating desirable, emancipatory technology that promotes human wellbeing and the common good. First, HCAI, as it appears in policy discourses, is the result of aiming to adapt the concept of human-centered design (HCD) to the public governance context of AI but without proper reflection on how it should be reformed to suit the new task environment. Second, the concept is mainly used in reference to realizing human and fundamental rights, which are necessary, but not sufficient, for technological emancipation. Third, the concept is used ambiguously in policy and strategy discourses, making it unclear how it should be operationalized in governance practices. This article explores means and approaches for using the HCAI approach for technological emancipation in the context of public AI governance. We propose that the potential for emancipatory technology development rests on expanding the traditional user-centered view of technology design to involve community- and society-centered perspectives in public governance. Developing public AI governance in this way relies on enabling inclusive governance modalities that enhance the social sustainability of AI deployment. We discuss mutual trust, transparency, communication, and civic tech as key prerequisites for socially sustainable and human-centered public AI governance. Finally, the article introduces a systemic approach to ethically and socially sustainable, human-centered AI development and deployment.

  • Conference Article
  • Cited by: 101
  • 10.1145/3544548.3580959
What is Human-Centered about Human-Centered AI? A Map of the Research Landscape
  • Apr 19, 2023
  • Tara Capel + 1 more

The application of Artificial Intelligence (AI) across a wide range of domains comes with both high expectations of its benefits and dire predictions of misuse. While AI systems have largely been driven by a technology-centered design approach, the potential societal consequences of AI have mobilized both HCI and AI researchers towards researching human-centered artificial intelligence (HCAI). However, there remains considerable ambiguity about what it means to frame, design and evaluate HCAI. This paper presents a critical review of the large corpus of peer-reviewed literature emerging on HCAI in order to characterize what the community is defining as HCAI. Our review contributes an overview and map of HCAI research based on work that explicitly mentions the terms ‘human-centered artificial intelligence’ or ‘human-centered machine learning’ or their variations, and suggests future challenges and research directions. The map reveals the breadth of research happening in HCAI, established clusters and the emerging areas of Interaction with AI and Ethical AI. The paper contributes a new definition of HCAI, and calls for greater collaboration between AI and HCI research, and new HCAI constructs.

  • Research Article
  • Cited by: 63
  • 10.1016/j.chbr.2023.100319
Towards human-centered artificial intelligence (AI) in architecture, engineering, and construction (AEC) industry
  • Aug 1, 2023
  • Computers in Human Behavior Reports
  • Hamed Nabizadeh Rafsanjani + 1 more

  • Research Article
  • Cited by: 10
  • 10.1177/00113921231211580
Looking at human-centered artificial intelligence as a problem and prospect for sociology: An analytic review
  • Nov 17, 2023
  • Current Sociology
  • Andrey V Rezaev + 1 more

Significant advances have been made within the past decade in theoretical and empirical studies of Artificial Intelligence. Through a review of the existing literature on Human-Centered Artificial Intelligence, this article raises new questions and provides additional scientific data intended to draw sociology and Artificial Intelligence studies closer together. The point of departure for the article is the emergence of Human-Centered Artificial Intelligence in scholarly discourse. The authors then explore how the term Human-Centered Artificial Intelligence is used and the dilemmas it raises. They next review how Human-Centered Artificial Intelligence appears in sociological and social-science scholarship. The authors close by formulating three rules of what not to do when studying Human-Centered Artificial Intelligence from a sociological perspective.

  • Research Article
  • 10.33140/amlai.07.01.01
Integrated Human-Centered Artificial Intelligence (HCAI) Performance & Development Model: Bridging the Policy-to-Practice Divide in Performance Management and Employee Development
  • Jan 16, 2026
  • Advances in Machine Learning & Artificial Intelligence
  • Rosemary Uche Packson-Enajerho

Purpose: Despite growing enthusiasm for Artificial Intelligence (AI) in Human Resource Management (HRM), a significant disconnect persists between the aspirational ideals of Human-Centered AI (HCAI) policies and their practical application in organizational performance management and employee development systems. Traditional performance appraisal methods remain infrequent, biased, and disengaging, while AI-based systems risk dehumanization and algorithmic bias if not ethically guided. This paper seeks to bridge this divide by proposing a comprehensive model that harmonizes data-driven analytics with empathetic, human-led management practices. Objective: The study aims to develop and present the Integrated Human-Centered Artificial Intelligence (HCAI) Performance & Development Model, a conceptual framework designed to operationalize the principles of HCAI in performance evaluation and learning systems. The model seeks to transform performance management from a compliance-oriented activity into a continuous, developmental, and ethically grounded process. Methodology: Employing a conceptual research design, this paper utilizes a theory-building approach based on the systematic synthesis and thematic analysis of existing scholarship in AI analytics, continuous performance feedback, motivational theory, and managerial coaching. The resulting model was constructed through iterative conceptual integration, informed by both empirical studies and theoretical frameworks, and elaborated using descriptive narrative supported by a visual schematic. 
Findings: The research introduces the Integrated HCAI Performance & Development Model, comprising four interdependent components: (1) the AI-Powered Analytics Engine, which aggregates multidimensional performance data to identify trends, skill gaps, and development opportunities; (2) the Human-Centered Interpretation Layer, where managers apply empathetic judgment to contextualize AI-generated insights; (3) the Continuous Feedback & Development Loop, which facilitates ongoing dialogue and co-created learning plans; and (4) the Strategic HR Policy Foundation, ensuring ethical integrity, transparency, and fairness. Collectively, these components align organizational policies with human-centered, technology-enhanced practices. Conclusion: The model provides an actionable framework for integrating intelligent analytics and human empathy to enhance performance management and employee development. It underscores the pivotal role of strategic HR leadership in ethically governing AI systems and cultivating a culture of psychological safety and learning. Future research should focus on empirical validation through longitudinal and quantitative studies to assess the model's impact on performance outcomes, motivation, and organizational adaptability.

  • Research Article
  • Cited by: 5
  • 10.1080/11038128.2024.2421355
Occupational therapy in the space of artificial intelligence: Ethical considerations and human-centered efforts
  • Nov 8, 2024
  • Scandinavian Journal of Occupational Therapy
  • Vera C Kaelin + 2 more

Background: Artificial intelligence (AI) technology is constantly and rapidly evolving and has the potential to benefit occupational therapy (OT) and OT clients. However, AI developments also pose risks and challenges, for example in relation to the ethical principles of OT. One way to support future AI technology aligned with OT ethical principles may be through human-centered AI (HCAI), an emerging branch within AI research and development with a notable overlap with OT values and beliefs. Objective: To explore the risks and challenges of AI technology, and how the combined expertise, skills, and knowledge of OT and HCAI can contribute to harnessing its potential and shaping its future, from the perspective of OT's ethical values and beliefs. Results: Opportunities for OT and HCAI collaboration related to future AI technology include ensuring a focus on (1) occupational performance and participation, while taking client-centeredness into account; (2) occupational justice and respect for diversity; and (3) transparency and respect for the privacy of occupational performance and participation data. Conclusion and Significance: There is a need for OTs to engage and ensure that AI is applied in a way that serves OT and OT clients in a meaningful and ethical way through the use of HCAI.

  • Research Article
  • Cited by: 19
  • 10.2196/51921
Designing Human-Centered AI to Prevent Medication Dispensing Errors: Focus Group Study With Pharmacists.
  • Dec 25, 2023
  • JMIR Formative Research
  • Yifan Zheng + 6 more

Medication errors, including dispensing errors, represent a substantial worldwide health risk with significant implications in terms of morbidity, mortality, and financial costs. Although pharmacists use methods like barcode scanning and double-checking for dispensing verification, these measures exhibit limitations. The application of artificial intelligence (AI) in pharmacy verification emerges as a potential solution, offering precision, rapid data analysis, and the ability to recognize medications through computer vision. For AI to be embraced, it must be designed with the end user in mind, fostering trust, clear communication, and seamless collaboration between AI and pharmacists. This study aimed to gather pharmacists' feedback in a focus group setting to help inform the initial design of the user interface and iterative designs of the AI prototype. A multidisciplinary research team engaged pharmacists in a 3-stage process to develop a human-centered AI system for medication dispensing verification. To design the AI model, we used a Bayesian neural network that predicts the dispensed pills' National Drug Code (NDC). Discussion scripts regarding how to design the system and feedback in focus groups were collected through audio recordings and professionally transcribed, followed by a content analysis guided by the Systems Engineering Initiative for Patient Safety and Human-Machine Teaming theoretical frameworks. A total of 8 pharmacists participated in 3 rounds of focus groups to identify current challenges in medication dispensing verification, brainstorm solutions, and provide feedback on our AI prototype. Participants considered several teaming scenarios, generally favoring a hybrid teaming model where the AI assists in the verification process and a pharmacist intervenes based on medication risk level and the AI's confidence level. 
Pharmacists highlighted the need for improving the interpretability of AI systems, such as adding stepwise checkmarks, probability scores, and details about drugs the AI model frequently confuses with the target drug. Pharmacists emphasized the need for simplicity and accessibility. They favored displaying only essential information to prevent overwhelming users with excessive data. Specific design features, such as juxtaposing pill images with their packaging for quick comparisons, were requested. Pharmacists preferred accept, reject, or unsure options. The final prototype interface included (1) checkmarks to compare pill characteristics between the AI-predicted NDC and the prescription's expected NDC, (2) a histogram showing predicted probabilities for the AI-identified NDC, (3) an image of an AI-provided "confused" pill, and (4) an NDC match status (ie, match, unmatched, or unsure). In partnership with pharmacists, we developed a human-centered AI prototype designed to enhance AI interpretability and foster trust. This initiative emphasized human-machine collaboration and positioned AI as an augmentative tool rather than a replacement. This study highlights the process of designing a human-centered AI for dispensing verification, emphasizing its interpretability, confidence visualization, and collaborative human-machine teaming styles.

  • Research Article
  • Cited by: 5
  • 10.2139/ssrn.3762891
Public Procurement and Innovation for Human-Centered Artificial Intelligence
  • Jan 1, 2021
  • SSRN Electronic Journal
  • Wim Naudé + 1 more

The possible negative consequences of Artificial Intelligence (AI) have given rise to calls for public policy to ensure that it is safe, and to prevent improper use and misuse. Human-centered AI (HCAI) draws on ethical principles and puts forth actionable guidelines in this regard. So far, however, these have lacked strong incentives for adherence. In this paper we contribute to the debate on HCAI by arguing that public procurement and innovation (PPaI) can be used to incentivize HCAI. We dissect the literature on PPaI and HCAI and provide a simple theoretical model to show that procurement of innovative AI solutions underpinned by ethical considerations can provide the incentives that scholars have called for. Our argument in favor of PPaI for HCAI is also an argument for the more innovative use of public procurement, and is consistent with calls for mission-oriented and challenge-led innovation policies. Our paper also contributes to the emerging literature on public entrepreneurship, given that PPaI for HCAI can advance the transformation of society, but only under uncertainty.

  • Research Article
  • Cited by: 117
  • 10.3390/s22083043
Digital Transformation in Smart Farm and Forest Operations Needs Human-Centered AI: Challenges and Future Directions.
  • Apr 15, 2022
  • Sensors (Basel, Switzerland)
  • Andreas Holzinger + 9 more

The main impetus for the global efforts toward the current digital transformation in almost all areas of our daily lives is due to the great successes of artificial intelligence (AI), and in particular, the workhorse of AI, statistical machine learning (ML). The intelligent analysis, modeling, and management of agricultural and forest ecosystems, and of the use and protection of soils, already play important roles in securing our planet for future generations and will become irreplaceable in the future. Technical solutions must encompass the entire agricultural and forestry value chain. The process of digital transformation is supported by cyber-physical systems enabled by advances in ML, the availability of big data and increasing computing power. For certain tasks, algorithms today achieve performances that exceed human levels. The challenge is to use multimodal information fusion, i.e., to integrate data from different sources (sensor data, images, *omics), and explain to an expert why a certain result was achieved. However, ML models often react to even small changes, and disturbances can have dramatic effects on their results. Therefore, the use of AI in areas that matter to human life (agriculture, forestry, climate, health, etc.) has led to an increased need for trustworthy AI with two main components: explainability and robustness. One step toward making AI more robust is to leverage expert knowledge. For example, a farmer/forester in the loop can often bring in experience and conceptual understanding to the AI pipeline—no AI can do this. Consequently, human-centered AI (HCAI) is a combination of “artificial intelligence” and “natural intelligence” to empower, amplify, and augment human performance, rather than replace people. 
To achieve practical success of HCAI in agriculture and forestry, this article identifies three important frontier research areas: (1) intelligent information fusion; (2) robotics and embodied intelligence; and (3) augmentation, explanation, and verification for trusted decision support. This goal will also require an agile, human-centered design approach for three generations (G). G1: Enabling easily realizable applications through immediate deployment of existing technology. G2: Medium-term modification of existing technology. G3: Advanced adaptation and evolution beyond state-of-the-art.

  • Conference Article
  • Cited by: 12
  • 10.1145/3544549.3585752
Towards a Human-Centred Artificial Intelligence Maturity Model
  • Apr 19, 2023
  • Maria Hartikainen + 2 more

Artificial intelligence (AI) is becoming a central building block of computational systems. Following the long traditions of human-centered design, Human-Centered AI (HCAI) emphasises the importance of putting humans and various societal considerations at the centre of development. However, the question is: how can HCAI be realised when designing systems that utilise novel computational tools and require consideration of an increasingly broad set of requirements, spanning from fairness and transparency to accountability and ethics? The purpose of our study is to support AI development practices in companies so that humans have AI solutions that are efficient, trustworthy, and safe. To this end, we propose a maturity model for HCAI (HCAI-MM). In this paper we present the first phase of the model's development, in which the central building blocks of HCAI are specified and initial company requirements for the model's structure and content are evaluated with four AI developers.

  • Research Article
  • Cited by: 1
  • 10.1080/09544828.2025.2518907
Human-centered AI design: developers’ perspectives
  • Jun 24, 2025
  • Journal of Engineering Design
  • Patrick Karekezi + 2 more

As Artificial Intelligence (AI) advances, it plays an increasingly central role in our society and future. While many researchers highlight its transformative potential, this optimism is tempered by challenges stemming from uneven access to AI, mirroring historical patterns of wealth, education, and resources. This disparity affects AI's design. Although Human-Centered AI (HCAI) is a significant topic in academia, industry practices lag. To advance HCAI, we assessed current AI development practices through interviews with developers from various AI companies. Thematic analysis revealed four key factors influencing AI design approaches: (i) developers’ motivations and technical background, (ii) company practices and peer pressure, (iii) market demands, and (iv) regulatory loopholes. The study offers insights to guide the design of more human-centered AI.

  • Research Article
  • 10.1080/17517575.2025.2595706
The impact of human-centred artificial intelligence on firms’ workforce demand in Industry 5.0 transition
  • Dec 5, 2025
  • Enterprise Information Systems
  • Yuting Wang + 3 more

This paper uses situated artificial intelligence theory (SAIT) and technology, organization, and environment (TOE) framework, to investigate the impact of human-centered artificial intelligence (HCAI) on workforce demand, crucial for operational resilience and competitive advantage. Patent and workforce data from Chinese listed companies (2015-23) reveals that HCAI significantly increases regular workforce demand by externalizing tacit knowledge and drives non-regular workforce demand through technological complexity. Moreover, digital transformation and financing constraints weaken HCAI’s positive effects on workforce demand, while industry competition strengthens them. These findings offer theoretical, managerial and policy implications for integrating HCAI with workforce strategies during the Industry 5.0 transition.

  • Research Article
  • Cited by: 41
  • 10.1016/j.caeai.2024.100306
Navigating the ethical terrain of AI in education: A systematic review on framing responsible human-centered AI practices
  • Sep 19, 2024
  • Computers and Education: Artificial Intelligence
  • Yao Fu + 1 more

  • Book Chapter
  • Cited by: 1
  • 10.1016/b978-0-323-99891-8.00004-8
Chapter 5 - Human-centered artificial intelligence
  • Jan 1, 2023
  • Innovations in Artificial Intelligence and Human-Computer Interaction in the Digital Era
  • Zainab Aizaz + 1 more

  • Research Article
  • Cited by: 4
  • 10.55612/s-5002-059-001sp
A Research Framework Focused on AI and Humans instead of AI versus Humans
  • Dec 15, 2023
  • Interaction Design and Architecture(s)
  • Gerhard Fischer

Despite lacking a shared understanding and a generally accepted definition, Artificial intelligence (AI) is promoted and credited with miraculous abilities to solve all problems. To gain a more nuanced and deeper understanding of the design trade-offs associated with AI, this paper proposes a research framework that contrasts two competing frameworks: (1) AI versus Humans (characterized by strong AI and Artificial General Intelligence) focused on replacing human beings and (2) AI and Humans (characterized by intelligence augmentation and human-centered AI) focused on empowering human beings as individuals and communities. The arguments in the paper are supported by research activities that explored conceptual frameworks and inspiring prototypes. These developments have resulted in gaining a deeper understanding of how AI-type systems can contribute to quality of life aspects with a specific focus on rethinking and reinventing learning, education, working, and collaboration in the digital age.
