Dialogues of Sense and Algorithm: Reconfiguring Arts-Based Research in the AI Era

  • Abstract
  • Literature Map
  • Similar Papers
Abstract

As qualitative research is increasingly shaped by artificial intelligence (AI), new challenges emerge. While AI expands creative possibilities, its integration also risks reducing sensory knowledge to abstract symbols and reinforcing cultural bias. This article introduces Sensory–Algorithmic Dialogue (SAD) as a critical methodological framework that reorients how qualitative research, particularly arts-based research (ABR), engages with AI. Rather than fusing sensory ethnography and algorithmic processes, SAD sustains their difference, treating misalignments not as flaws but as productive frictions. Drawing on ethnographic fieldwork in Jingdezhen, a thousand-year-old ceramic art community, we show how AI’s misreadings become entry points for reflection, where researchers’ multisensory expertise reclaims knowledge overlooked by algorithmic abstraction. In doing so, SAD repositions AI from a passive tool to an active interlocutor, turning algorithmic limitations into opportunities for creativity, critique, and methodological innovation, while deepening ABR’s capacity to sustain embodied, multisensory, and culturally grounded inquiry in the AI era.

Similar Papers
  • Research Article
  • 10.12681/homvir.43478
Qualitative Inquiry in the era of artificial intelligence: Why and how to keep the practice human?
  • Nov 25, 2025
  • Homo Virtualis
  • Alexios Brailas + 1 more

How might our research practice, and our very way of knowing, change if, instead of rushing to feed qualitative data into ‘intelligent’ machines for the supposed ever optimal analysis, we returned our attention to the living moment of data production itself, treating it as an embodied, mindful, relational, and transformative act of co-creation, presence, and meaning-making that no algorithm can replicate? In this era of generative Artificial Intelligence (AI), qualitative research is at a crossroads. Large Language Models (LLMs) promise a more objective and efficient way to analyze vast volumes of qualitative data. This may seem like a magical solution for the positivist approach to qualitative inquiry, the so-called small q tradition. Yet, it also presents a dystopian prospect for the more interpretive, relational, culturally situated, and social-constructionist approaches of the big Q tradition. In the latter case, the risk is that the qualitative researcher becomes overshadowed by the machine, with the process losing its relational and generative capacities. This special issue addresses precisely this tension by showcasing a series of undergraduate research projects, demonstrating why it is so important to keep qualitative inquiry, and especially qualitative interviewing, a profoundly human practice. Generative AI is here to stay. The challenge now becomes how to make qualitative inquiry even more process-oriented, relational, meaningful, embodied, and transformative, and how to use AI technologies in ways that serve this purpose.

  • Research Article
  • Citations: 93
  • 10.1001/jama.2023.25057
Three Epochs of Artificial Intelligence in Health Care
  • Jan 16, 2024
  • JAMA
  • Michael D Howell + 2 more

Importance: Interest in artificial intelligence (AI) has reached an all-time high, and health care leaders across the ecosystem are faced with questions about where, when, and how to deploy AI and how to understand its risks, problems, and possibilities. Observations: While AI as a concept has existed since the 1950s, all AI is not the same. Capabilities and risks of various kinds of AI differ markedly, and on examination 3 epochs of AI emerge. AI 1.0 includes symbolic AI, which attempts to encode human knowledge into computational rules, as well as probabilistic models. The era of AI 2.0 began with deep learning, in which models learn from examples labeled with ground truth. This era brought about many advances both in people’s daily lives and in health care. Deep learning models are task-specific, meaning they do one thing at a time, and they primarily focus on classification and prediction. AI 3.0 is the era of foundation models and generative AI. Models in AI 3.0 have fundamentally new (and potentially transformative) capabilities, as well as new kinds of risks, such as hallucinations. These models can do many different kinds of tasks without being retrained on a new dataset. For example, a simple text instruction will change the model’s behavior. Prompts such as “Write this note for a specialist consultant” and “Write this note for the patient’s mother” will produce markedly different content. Conclusions and Relevance: Foundation models and generative AI represent a major revolution in AI’s capabilities, offering tremendous potential to improve care. Health care leaders are making decisions about AI today. While any heuristic omits details and loses nuance, the framework of AI 1.0, 2.0, and 3.0 may be helpful to decision-makers because each epoch has fundamentally different capabilities and risks.

  • Research Article
  • 10.3233/jifs-189940
The application and regulation of administrative discretion in the era of artificial intelligence
  • Jan 1, 2021
  • Journal of Intelligent & Fuzzy Systems
  • Li Li + 2 more

The advent of the artificial intelligence era makes it possible for administrative subjects to use intelligent machines and systems to engage in administrative activities. Among these, administrative discretion, the core of administrative law, raises particular concerns regarding the use of artificial intelligence. In the era of weak artificial intelligence, intelligent administrative discretion has been widely used in all aspects of administrative law enforcement, but administrative subjects have sometimes been negligent in exercising discretion. Looking forward to the era of strong artificial intelligence, artificial intelligence machines or systems may have the ability and power to exercise administrative discretion independently, but they cannot become the real subject of administrative discretion. Intelligent administrative discretion is conducive to administrative efficiency and helps guarantee the fairness of administrative behavior, but it also faces legal risks such as unfair discretionary outcomes, opaque algorithm settings, and the weakening of government functions. Only by strengthening the legal basis, protecting the rights of the counterparty, improving the accuracy of the algorithm, and strengthening the status of the administrative subject can administrative discretionary behavior in the context of artificial intelligence be effectively regulated.

  • Research Article
  • Citations: 2
  • 10.47941/jmlp.2162
Intellectual Property Rights in the Era of Artificial Intelligence
  • Aug 2, 2024
  • Journal of Modern Law and Policy
  • Yvonne Nyaboke

Purpose: The general objective of this study was to explore Intellectual Property Rights in the era of Artificial Intelligence. Methodology: The study adopted a desktop research methodology. Desk research refers to secondary data, i.e., data that can be collected without fieldwork. It involves collecting data from existing resources and is therefore often considered a low-cost technique compared with field research, as the main costs are executives’ time, telephone charges, and directories. Thus, the study relied on already published studies, reports and statistics. This secondary data was easily accessed through online journals and libraries. Findings: The findings reveal that there exists a contextual and methodological gap relating to Intellectual Property Rights in the era of Artificial Intelligence. Preliminary empirical review revealed that the era of Artificial Intelligence (AI) has significantly transformed the landscape of Intellectual Property Rights (IPR), presenting both opportunities and challenges. It highlighted that traditional IP laws are increasingly inadequate to address the complexities introduced by AI-generated content, necessitating a rethinking of existing frameworks. The study emphasized the need for recognizing AI's role in the creation of new works and inventions and the importance of developing balanced approaches to protect both human and AI contributions. Ethical considerations, such as accountability, transparency, and fairness, were also deemed crucial in ensuring responsible AI use. Overall, the study called for a comprehensive and proactive approach to integrate AI into IPR, ensuring robust protections while fostering innovation. Unique Contribution to Theory, Practice and Policy: The Technological Determinism Theory, Innovation Diffusion Theory and Legal Realism Theory may be used to anchor future studies on Intellectual Property Rights in the era of Artificial Intelligence.
The study recommended revising existing IP laws to explicitly include AI-generated content and inventions, clarifying criteria for authorship and inventorship. It suggested expanding theoretical frameworks to accommodate AI contributions, emphasizing the collaborative nature of human and AI creativity. Practical measures, such as enhanced cybersecurity and legal safeguards for AI-generated trade secrets, were advised. Policy-wise, the study advocated for international cooperation to harmonize IP laws concerning AI. Developing ethical guidelines for responsible AI use and implementing education programs to inform stakeholders about AI and IP implications were also recommended. These measures aimed to create a balanced IP framework supporting innovation while protecting the rights of all stakeholders.

  • Research Article
  • 10.46914/2959-4197-2025-1-3-133-143
Protection of intellectual rights in the era of artificial intelligence: experience of Kazakhstan and world practice
  • Sep 30, 2025
  • Eurasian Scientific Journal of Law
  • A I Rzabay + 2 more

The digital age, marked by the rapid development of artificial intelligence (AI), presents us with new challenges, especially in the field of intellectual property protection. AI creates both new opportunities for creativity and innovation, and unprecedented risks of copyright infringement, patent protection violations, and other forms of intellectual property infringement. The main objective of this research is a comprehensive analysis of legal mechanisms for protecting intellectual property rights in the context of rapid AI development, focusing on the Kazakhstani legal system and comparative analysis of international experience. The scientific significance of this work lies in the in-depth analysis of a current and insufficiently studied problem – the protection of intellectual property rights in the context of AI. The research will contribute to the development of theoretical understanding of the legal regulation of intellectual property in the digital environment. The research employed a comprehensive approach, combining the analysis of legal acts with comparative and doctrinal analysis. A complete investigation revealed some problems in the legislation, namely, the absence of fundamental concepts related to AI. According to the authors, it is necessary to delve deeper into international practice, particularly that of the USA, examining both its legislation and case law. The practical significance of the work lies in the development of specific recommendations for lawmakers, law enforcement officials, and business representatives on improving the legal framework and law enforcement practices in Kazakhstan. The research results will help reduce the risks of intellectual property infringement, stimulate innovation, and create a more favorable environment for the development of the AI industry in the country.

  • Abstract
  • Citations: 2
  • 10.1152/advan.00253.2024
Navigating the frontier of AI-assisted student assignments: challenges, skills, and solutions.
  • Sep 1, 2025
  • Advances in physiology education
  • Suzanne Estaphan + 2 more

The rise of artificial intelligence (AI) is transforming educational practices, particularly in assessment. While AI may support students in idea generation and the summarization of source materials, it also introduces challenges related to content validity, academic integrity, and the development of critical thinking skills. Educators need strategies to navigate these complexities and maintain rigorous, ethical assessments that promote higher order cognitive skills. This article provides practical guidance for educators on designing take-home assessments (e.g. research-based assignments) in the AI era. This guidance was developed through a collaborative, consensus-driven process involving a consortium of three educators with diverse academic backgrounds, career stages, and perspectives on AI in education. Members, holding experience in higher education across the United Kingdom, United States of America, Australia, and Middle East and North Africa regions, brought varied insights into AI's role in education. The team engaged in an iterative process of refining recommendations through biweekly virtual meetings and offline discussions. Four key recommendations are presented: 1) codeveloping AI literacy among students and educators, 2) designing assessments that prioritize process over output, 3) validating learning through AI-free assessments, and 4) preparing students for AI-enhanced workplaces by developing AI communication skills and promoting human-AI collaboration. These strategies emphasize ethical AI use, personalized feedback, and creativity.
By adopting these approaches, educators can balance the benefits and risks of AI in assessments, fostering authentic learning while preparing students for the challenges of an AI-driven world. NEW & NOTEWORTHY: This paper presents a framework to effectively design take-home assessments in the generative artificial intelligence (AI) era with four key recommendations to navigate the challenges and opportunities posed by generative AI. From codeveloping AI literacy to fostering human-AI collaboration, the strategies empower educators to promote authentic learning, critical thinking, and ethical AI use. Adaptable to various contexts, these insights help prepare students for an AI-driven future while maintaining academic rigor and integrity.

  • Research Article
  • 10.62381/h241a10
Digital Innovation Pathways in the Design of Macroeconomics Courses in the Era of Artificial Intelligence
  • Oct 1, 2024
  • Higher Education and Practice
  • Lingshan Li + 2 more

The advent of the artificial intelligence (AI) era presents significant challenges to traditional education, necessitating innovative approaches to teaching methods. Integrating artificial intelligence into classroom instruction to enhance course quality has become a pressing concern. This study explores the design of macroeconomics courses in the context of artificial intelligence, addressing the limitations of traditional teaching methods and highlighting the potential benefits of AI integration. These advantages include personalized learning experiences, increased teaching efficiency, enhanced interactivity, real-time feedback, and access to expanded teaching resources. Building on these insights, the research proposes a comprehensive framework for innovating macroeconomics course design. First, it emphasizes content innovation, which involves integrating AI-related economic phenomena and frontier developments in the discipline. Second, it focuses on methodological innovation, leveraging AI-powered tools to facilitate teaching and incorporating practical, hands-on learning experiences. Finally, it advocates for assessment reform, emphasizing process-oriented evaluations and introducing diverse, multi-dimensional assessment methods. These innovative strategies are shown to enhance the quality of macroeconomics education, improve student learning outcomes, and better align educational programs with the demands of a rapidly evolving economy in the AI era.

  • Research Article
  • Citations: 2
  • 10.1177/16094069251337583
Qualitative Research in the Era of AI: A Return to Positivism or a New Paradigm?
  • Apr 1, 2025
  • International Journal of Qualitative Methods
  • Georgios Chatzichristos

The integration of artificial intelligence (AI) into qualitative research is transforming the landscape of social inquiry, raising significant epistemological and methodological questions. This study explores the dual potential of AI to enhance the scalability of qualitative research while challenging its interpretive depth. It situates this tension within the historical trajectory of qualitative research (and specifically Grounded Theory) from positivist to constructivist paradigms, highlighting how AI’s automated, data-driven approaches may signal a resurgence of positivist assumptions. Key research questions guide this exploration: To what extent do qualitative researchers harness AI’s efficiencies in data analysis? Can the extended use of AI in qualitative research impact the depth and reflexivity essential to interpretive analysis? To delve into these questions, the study employs a Technology Acceptance Model (TAM) survey combined with semi-structured interviews, strategically targeting European researchers to explore AI’s perceived usefulness, ease of use, and implications for qualitative methodologies. Survey and interview findings reveal a generational divide: early-career researchers embrace AI’s capacity for large-scale data analysis and thematic identification, while experienced researchers express scepticism about its impact on qualitative reflexivity and contextual richness. This generational gap implies that the receptiveness of younger researchers could lead to a gradual return to a methodological positivism. While this study brings a generational divide under the spotlight, future directions call for deeper investigations into the structural inequalities shaping AI adoption, such as access to resources, geography and gender.

  • Research Article
  • 10.24256/ideas.v13i1.6133
Micro-Ethnography Approach in Using Technology to Support Learning Interaction
  • Apr 2, 2025
  • IDEAS: Journal on English Language Teaching and Learning, Linguistics and Literature
  • Yogi Novario Nandes + 2 more

Technological and Artificial Intelligence (AI) development provides challenges and opportunities for lecturers to create dynamic and relevant classroom engagements, especially in courses that demand intensive communication skills, such as English Education. A micro-ethnography approach can help lecturers analyze social engagements in the classroom in detail, adapt their teaching style, and increase student engagement by combining technology and a personal approach. This study aims to explore lecturers' strategies for strengthening classroom interaction in the English Education study program using the micro-ethnography approach in the era of technology and AI. This research uses the literature review method and qualitative descriptive analysis to collect and analyze data on lecturers' strategies for strengthening classroom interaction in the era of AI and technology in English language teaching. The results of this study show that the utilization of technology and the micro-ethnography approach in education, including in the English Education study program, is crucial to strengthening classroom interaction in the digital era. Technologies such as AI provide opportunities to understand students' learning preferences more personally, while micro-ethnography approaches help lecturers create inclusive and adaptive learning experiences. Keywords: AI; Learning Interaction; Micro-Ethnography; Technology

  • Research Article
  • 10.15575/diroyah.v8i1.29382
Prophetic Communication in the Era of Artificial Intelligence: Efforts to Convey Comprehensive Islamic Messages
  • Nov 4, 2023
  • Diroyah : Jurnal Studi Ilmu Hadis
  • Aang Ridwan

The study essentially aims to analyze the changes occurring within the practice of dakwah as a form of communicating Islamic messages in the era of digital technology and artificial intelligence. The practice of dakwah must retain its prophetic communication dimension even as it adjusts to new communication formats brought about by the digital era and artificial intelligence. This study employs a qualitative approach with a descriptive-analytical method. This method was chosen for its capacity to provide a profound understanding of complex phenomena within their contexts, aligned with the research objective to analyze the changes in communication patterns and their effects on dakwah practices. Data is gathered through documentation and observation of dakwah practices across various platforms and digital media. The study indicates that the shifts in dakwah communication patterns in the digital era and the advancement of artificial intelligence have significant and intricate impacts on the comprehensive communication of Islamic prophetic messages. 
Key points include: (1) Changes in the meaning and practice of dakwah depict a shift in focus from the spiritual and moral aspects to the dissemination of Islamic messages through technological platforms; (2) Changes in dakwah communication formats and platforms reflect the adaptation of preachers to digital trends and audience preferences; (3) The influence of social media in dakwah practices opens broad avenues for disseminating Islamic messages; (4) Artificial intelligence (AI) has notably contributed to presenting dakwah content to the public; (5) Normative dynamics in digital-era dakwah communication reveal the challenges in maintaining a balance between popularity and the integrity of religious teachings; (6) Ethical challenges and controversies in digital dakwah underscore the need to uphold moral and ethical values in religious communication; and (7) Education and supervision emerge as pivotal in addressing the challenges of digital dakwah. Content creators, preachers, and society at large must be equipped with a proper understanding of technology usage aligned with scholarly and ethical responsibility, ensuring that the conveyance of religious messages remains accurate, substantial, and consistent with Islamic values.

  • Research Article
  • Citations: 1
  • 10.58567/jre02020001
Regional Economic Development in the AI Era: Methods, Opportunities, and Challenges
  • Oct 27, 2023
  • Journal of Regional Economics
  • Robertas Damaševičius

The dawn of the Artificial Intelligence (AI) era presents a plethora of new possibilities for analyzing regional economic development. The present article provides an in-depth exploration of the methods employed in this field, highlighting the immense opportunities that AI offers while also addressing potential challenges. The role of AI is crucial in complex data handling, enabling efficient analyses of intricate regional economic patterns. This capacity is paramount in shaping economic policies and strategies that are reflective of each region's unique needs and potential. The article firstly explores various AI methods used in economic analysis, including but not limited to machine learning, deep learning, and natural language processing. It delves into the application of these methods in discerning development trends, predicting economic shifts, and identifying strategic economic drivers unique to various regions. Subsequently, the potential of AI to transform regional economic analysis is discussed, encompassing its capability to process large and complex datasets, its power to predict future trends based on past and present data, and its ability to aid in strategic decision-making. However, this new era of AI-driven economic analysis is not without challenges. The latter part of this article thus confronts the issues related to data privacy, ethical use of AI, and the necessity of interdisciplinary skills in AI and economics. This exploration contributes to a broader understanding of how AI is transforming the landscape of regional economic development analysis, illuminating both its present use and future implications. By understanding these dynamics, we can better harness the potential of AI to advance economic prosperity in various regions around the globe.

  • Research Article
  • Citations: 67
  • 10.1080/10400419.2022.2107850
Redefining Creativity in the Era of AI? Perspectives of Computer Scientists and New Media Artists
  • Aug 22, 2022
  • Creativity Research Journal
  • Roosa Wingström + 2 more

Artificial intelligence (AI) has breached creativity research. The advancements of creative AI systems dispute the common definitions of creativity that have traditionally focused on five elements: actor, process, outcome, domain, and space. Moreover, creative workers, such as scientists and artists, increasingly use AI in their creative processes, and the concept of co-creativity has emerged to describe blended human–AI creativity. These issues evoke the question of whether creativity requires redefinition in the era of AI. Currently, co-creativity is mostly studied within the framework of computer science in pre-organized laboratory settings. This study contributes from a human scientific perspective with 52 interviews of Finland-based computer scientists and new media artists who use AI in their work. The results suggest scientists and artists use similar elements to define creativity. However, the role of AI differs between the scientific and artistic creative processes. Scientists need AI to produce accurate and trustworthy outcomes, whereas artists use AI to explore and play. Unlike the scientists, some artists also considered their work with AI co-creative. We suggest that co-creativity can explain the contemporary creative processes in the era of AI and should be the focal point of future creativity research.

  • Research Article
  • 10.3390/healthcare13233057
Ethical Decision-Making Guidelines for Mental Health Clinicians in the Artificial Intelligence (AI) Era
  • Nov 25, 2025
  • Healthcare
  • Yegan Pillay

The meteoric rise in generative AI has created both opportunities and ethical challenges for the mental health disciplines, namely clinical mental health counseling, psychology, psychiatry, and social work. While these disciplines have been grounded in well-established ethical principles such as autonomy, beneficence, justice, fidelity, and confidentiality, the exponential ubiquity of AI in society has rendered mental health professionals unsure as to how to navigate ethical decision making in the AI era. The author proposes a preliminary ethical framework which synthesizes the codes of ethics of the American Counseling Association (ACA), the American Psychological Association (APA), the American Medical Association (AMA), and the National Association of Social Workers (NASW), organized around five pillars: (i) autonomy and informed consent; (ii) beneficence and non-maleficence; (iii) confidentiality, privacy, and transparency; (iv) justice, fairness and inclusiveness; and (v) fidelity, professional integrity, and accountability. These pillars are juxtaposed with AI ethical guidelines developed by multinational organizations, governmental and non-governmental entities, and technology corporations. The resulting integrated ethical framework provides a practical, cogent structure that mental health professionals can use when navigating this uncharted terrain. A case study based on the proposed ethical framework, along with strategies that clinical mental health professionals can consider prior to incorporating AI into their clinical repertoire, is offered. Limitations of the framework and its implications for future research are addressed.

  • Research Article
  • 10.2196/79961
Evolving Health Information–Seeking Behavior in the Context of Google AI Overviews, ChatGPT, and Alexa: Interview Study Using the Think-Aloud Protocol
  • Oct 7, 2025
  • Journal of Medical Internet Research
  • Claire Wardle + 2 more

Background: Online health information seeking is undergoing a major shift with the advent of artificial intelligence (AI)–powered technologies such as voice assistants and large language models (LLMs). While existing health information–seeking behavior models have long explained how people find and evaluate health information, less is known about how users engage with these newer tools, particularly tools that provide “one” answer rather than the resources to investigate a number of different sources. Objective: This study aimed to explore how people use and perceive AI- and voice-assisted technologies when searching for health information and to evaluate whether these tools are reshaping traditional patterns of health information seeking and credibility assessment. Methods: We conducted in-depth qualitative research with 27 participants (ages 19-80 years) using a think-aloud protocol. Participants searched for health information across 3 platforms—Google, ChatGPT, and Alexa—while verbalizing their thought processes. Prompts included both a standardized hypothetical scenario and a personally relevant health query. Sessions were transcribed and analyzed using reflexive thematic analysis to identify patterns in search behavior, perceptions of trust and utility, and differences across platforms and user demographics. Results: Participants integrated AI tools into their broader search routines rather than using them in isolation. ChatGPT was valued for its clarity, speed, and ability to generate keywords or summarize complex topics, even by users skeptical of its accuracy. Trust and utility did not always align; participants often used ChatGPT despite concerns about sourcing and bias. Google’s AI Overviews were met with caution—participants frequently skipped them to review traditional search results. Alexa was viewed as convenient but limited, particularly for in-depth health queries. Platform choice was influenced by the seriousness of the health issue, context of use, and prior experience. One-third of participants were multilingual, and they identified challenges with voice recognition, cultural relevance, and data provenance. Overall, users exhibited sophisticated “mix-and-match” behaviors, drawing on multiple tools depending on context, urgency, and familiarity. Conclusions: The findings suggest the need for additional research into the ways in which search behavior in the era of AI- and voice-assisted technologies is becoming more dynamic and context-driven. While the sample size is small, participants in this study selectively engaged with AI- and voice-assisted tools based on perceived usefulness, not just trustworthiness, challenging assumptions that credibility is the primary driver of technology adoption. Findings highlight the need for digital health literacy efforts that help users evaluate both the capabilities and limitations of emerging tools. Given the rapid evolution of search technologies, longitudinal studies and real-time observation methods are essential for understanding how AI continues to reshape health information seeking.

  • Research Article
  • 10.62517/jhet.202515621
A Study on the Connotation and Dimensional Structure of Postgraduate Innovation Ability in the Context of Artificial Intelligence
  • Dec 1, 2025
  • Journal of Higher Education Teaching
  • Lei Wang + 2 more

In the context of the rapid development of artificial intelligence, the connotation and cultivation methods of postgraduate innovation ability are undergoing profound changes. The traditional framework of innovation ability emphasizes "knowledge memorization + problem-solving," which is no longer suited to an era characterized by data-driven methods, human-machine collaboration, and interdisciplinary artificial intelligence. This study holds that innovation ability is not a "professional skill" but a "comprehensive quality." The impact of generative artificial intelligence on the cultivation of innovation ability is a multi-dimensional and complex mechanism that requires the integration of multiple theories for explanation. The study also emphasizes that the cultivation of innovation ability cannot be separated from collaboration among multiple stakeholders such as the government, universities, and society. Based on literature review and analysis, expert interviews, questionnaire surveys, and case studies, this paper redefines the connotation and structural framework of postgraduate innovation ability, constructing a multi-dimensional innovation ability framework encompassing digital literacy, technological thinking, and teamwork. This framework emphasizes digital literacy as the foundation, technological thinking as the core, and teamwork as the support, while also integrating critical thinking, ethical concepts, and interdisciplinary integration skills. It aligns with the new standards for high-level talent in the era of artificial intelligence. This research can provide a theoretical basis and practical pathways for optimizing postgraduate training programs, reforming teaching methods, and innovating mentor guidance mechanisms.
