Abstract

Artificial intelligence (AI) will revolutionize health science education, and it is happening now. The most recent reason? ChatGPT, a newly available chatbot AI with pronounced synthesis and language capabilities (OpenAI, 2023). Like previous technological interventions and platforms, such as Twitter, that revolutionized communication and influenced research and public discourse in the health sciences, ChatGPT will not only influence health education, practice and research, but will shift them profoundly. While the time to fully pre-empt ChatGPT's development has passed, the opportunity for nursing to respond well has not. But first, ChatGPT and its possible impacts on higher education in the health sciences must be considered. Here, we introduce ChatGPT, highlight its likely impact on higher education, and outline what nursing and the health sciences can do about it. ChatGPT is an AI trained as an interactive conversational chatbot capable of responding to prompts in various text formats (Gleason, 2022; OpenAI, 2023). ChatGPT runs on GPT-3 (Generative Pre-Trained Transformer-3), the technology underlying its ability to understand and generate text. This means the application can perform more sophisticated functions in response to users' entries, including seeking clarification through follow-up questions, challenging underlying definitions, and stating and questioning assumptions, among many others. There are clear parallels between this and many other scholarly discourses, including the production and evaluation of student writing, conference conversations, and academic publications (Kamler & Thomson, 2006). These sophisticated functions have quickly attracted attention. The AI laboratory OpenAI only launched ChatGPT on November 30, 2022, and it has already gained attention in the mass media and academic press (Gleason, 2022; Graham, 2022; Wingard, 2023). While chatbots and AI have existed for approximately 60 and 70 years, respectively (Ina, 2022), ChatGPT is different.
It will have a transformative effect on higher education, especially around writing and student work. First, it is “generative”: ChatGPT can create new text based on a range of inputs, avoiding the rote and repetitive responses of other AI chatbots, which are a glaring clue to any customer using a commercial AI customer-service chat that their complaint is not, after all, being heard by a real person. ChatGPT can also demonstrate sensitivity to context and, from this, generate text that sounds natural, more human. Second, ChatGPT is a free-to-use application, removing the pay-to-access barrier experienced by students seeking to use other AI applications. Given this, and its remarkable capacities, ChatGPT had over 1 million users within the first 5 days of its release (Gleason, 2022). Third, ChatGPT provides virtually instant, comprehensive and logical text responses in any format and genre requested of it (including essays, prose, tweets, LinkedIn posts and columns). Crucially, some authors report that this text is undetectable by current plagiarism software; for instance, Gleason (2022) asserted that there is no way to prove that ChatGPT-generated text is AI generated. Fourth, the scale and complexity of ChatGPT are remarkable; it is among the largest language models ever created, with 175 billion parameters, enabling it to avoid rigid, script-based responses (OpenAI, 2023). The ability of ChatGPT to respond as a human would is its greatest strength, and paradoxically its greatest risk in scenarios in which a human response is necessary for ethical and scholarly standards and integrity. The possible impacts of ChatGPT on higher education are transformative, and ChatGPT is testing our ability to envision and respond to them.
Should those in nursing and other parts of higher education integrate ChatGPT in the service of learning, avoid considering its impact altogether, or acknowledge it but prohibit students' engagement, assuming that students will abide by this regulation? To begin addressing these vital questions, we asked ChatGPT what its likely impacts on higher education are going to be. Its response included the following: “It is worth noting that GPT-3, although powerful, is not a silver bullet solution for all problems in education and proper implementation and ethical use of the technology is important. Also, there may be some concerns with privacy and security when using GPT-3 in educational settings.” ChatGPT provided this coherent, cogent and very human-like response within 10 seconds of receiving the query. It was equally capable of responding meaningfully to follow-up prompts, providing additional information, depth and clarity in its responses, and composing tweets and LinkedIn postings to address the topic. As ChatGPT itself indicated, its profile in higher education is one of balancing benefits with risks; hence, considering various stances towards its use can help educators and educational institutions make such critical decisions. Here, we present three hypothetical case scenarios reflecting responses to ChatGPT by nursing and the health sciences, postulating possible impacts for each. Previously, we argued that Twitter was a conversation that would be ongoing regardless of nursing's acceptance of it (Archibald & Clark, 2014). Nursing and other health professions face a similar, but more challenging, decision about whether or not to respond to the emergence of ChatGPT. Amid already pressing concerns, avoidance is tempting and may superficially feel good, at least initially.
Avoidance may stem from many causes, including fear, a lack of awareness of the existence of this emergent platform and technology, a lack of appreciation of the full scope of its capabilities and possible impacts, or ignorance or dismissal of the possible influence ChatGPT may have on higher education. Yet in the famous words attributed to Abraham Lincoln: “You cannot escape the responsibility of tomorrow by evading it today.” Ignoring the existence of ChatGPT or avoiding its inevitable impact on higher education will result in multi-level harms almost immediately, and would be a grave mistake for nursing and the health sciences for many reasons. First, there will be no structures in place to ensure the integrity of student learning, particularly since safeguards would not exist to determine whether student writing was AI generated. Students are writing essays with ChatGPT at the very moment we write this; we are already behind the curve. Second, a ‘head-in-the-sand’ perspective does little for the professional images of nursing and other health professions. Rather, avoidance may have the negative effect of highlighting a lack of timeliness or, at worst, make approaches to professional education appear irrelevant. Third, students will not have the benefit of integrating ChatGPT into their learning, meaning the opportunity to treat ChatGPT as another learning tool will be lost. Fourth, an avoidance approach means that shortcomings in ChatGPT cannot be critically analysed, again undercutting its potential application as a learning tool. Admittedly, by avoiding the use of ChatGPT in higher education, educators also avoid manifold and serious privacy and security risks, which ChatGPT adeptly highlighted for us: without safeguards, students' personal data could be subject to unauthorized access or other forms of misuse, and if not secured, this information could be used for nefarious purposes.
For those overseeing nursing (and indeed other health professions) education, this risk extends not only to academia and education but also to public safety. In the prohibition stance, nursing and health science educators and educational institutions take a strong stance against the use of ChatGPT, positioning its use, like that of essay-writing mills, as a direct threat to academic integrity. Such a stance would require browser lockdowns and student oaths or agreements within a punitive model founded on distrust and codified in further bureaucracy (e.g., in forms and declarations). Patrolling and enforcing such measures would be resource intensive and, in all probability, ineffective, given the complexity of monitoring a large student body and the possibility that students may ingeniously find ways to bypass instituted lockdown measures. Further, students may respond with frustration to restrictive measures that appear to undermine the larger proportion of students who choose to think and act ethically in their scholarly conduct and who truly own their personal ethics and commitment to academic integrity. This may seem antithetical to the adult educational model central to higher education and educational institutions. Moreover, it draws attention to the importance of education and prevention in ethical conduct: focusing on helping students (especially those early in their studies) understand what scholarly integrity is and why it is important, not only for the credibility of their eventual qualifications but more widely for public safety and the public good. The prohibition stance carries many of the same shortcomings and challenges as the avoidance stance, such as the inability of students to learn from ChatGPT; there is similarly a risk that prohibition prevents educators from developing students' critical appraisal of ChatGPT's outputs.
Specifically, as a powerful tool capable of synthesizing and integrating large and disparate volumes of web-based information, ChatGPT will both reflect and then reproduce and amplify extant biases and stereotypes in this literature. Students require guidance to recognize that ChatGPT may replicate these biases, to identify what these biases are, and to formulate their critiques. Such exercises could be powerful learning opportunities for students. Like an avoidance stance, a prohibition stance fundamentally neglects that patients, as consumers of health care, are likely to turn to ChatGPT for knowledge regarding their health concerns or questions. This may, like Internet searching, amplify the individual information-seeking behaviours that occur in the absence of access to, or support from, nurses and other healthcare providers. As with Internet searching, which is not in itself problematic, nurses and other health professionals must have a working knowledge of ChatGPT, including its shortcomings in providing health information, in order to effectively support and educate individuals within their care. By avoiding or prohibiting the use of ChatGPT in higher education, institutions are less capable of preparing future health professionals for this forthcoming reality. A third possible stance for nursing and the health professions is the integration of ChatGPT into educational processes and assessments. We align with and advocate for this pragmatic and forward-thinking choice: one that accepts the likely ubiquitous use of ChatGPT in nursing and health science education (similar to the Internet) and leverages its potential while addressing, rather than dismissing, its possible harms. However, such integration requires educators to re-imagine assessment to emphasize process over end point, such as the essay as the terminal output for grading (Gleason, 2022).
For instance, Gleason (2022) proposes that educators require students to generate a ChatGPT text and compare it with the course readings and objectives to critically appraise its content. Given that ChatGPT can replicate biases present in online text, facilitating such critical appraisal can prove pedagogically useful in developing vital critical and reflective skills, rendering these skills more, not less, valuable because of the ChatGPT platform. ChatGPT itself indicates that it is fundamentally up to students to abide by and uphold principles of academic integrity (OpenAI, 2023). Contrary to some emerging literature stipulating that plagiarism detection software is ineffective in detecting ChatGPT-generated content (e.g., Gleason, 2022), ChatGPT itself indicates that software exists that is sensitive to ChatGPT-generated text (OpenAI, 2023). A team of investigators in the United Kingdom tested this, running ChatGPT-generated short-form essay responses through the plagiarism software Grammarly and Turnitin, which returned scores of 2% ± 1% and 7% ± 2%, respectively (Yeadon et al., 2022). Based on these scores, and the favourable grading that these essays received, the authors concluded that ChatGPT is a major threat to fidelity in their academic context. Educational institutions must therefore ensure that academic integrity policies and detection software are in place, and that staff are sufficiently attuned and trained to ChatGPT, including the current shortcomings and forthcoming advances in plagiarism software, to mitigate the risk of its inappropriate use. Seen as a valid platform that lacks an inherent hostility to academic and professional integrity, ChatGPT should be used, like the Internet, fairly and in a manner aligned with students' skill levels. The tool can support complexity; however, learning and applying it demands a skillset that must itself be learned.
As such, an integration stance requires rapid adaptation by educational institutions to ensure that appropriate staff training is provided on ChatGPT's ethical use and practical applications, that comprehensive policies are in place to guide educator and student behaviour, and that timely and responsive risk assessment and mitigation plans exist to revisit the use and integration of ChatGPT at frequent intervals, given the rapidity of its ongoing development. In conclusion, with the advent and remarkable uptake of ChatGPT, health professionals and educators now face an important decision. As we have inferred above, it is a decision fused, for each of us and collectively for our institutions, with a mix of emotion, projection and reaction. ChatGPT will touch each of us, but it is what disciplines, institutions and people do in response that matters most. Do we choose to see and handle ChatGPT as a tool that, despite its risks, can not only co-exist with but also improve student learning when approached appropriately? Or do we regard this technology as oppositional and antithetical to learning and scholarly ethics, governed by fear of its misuse and a lack of awareness of its potential? How learners engage with higher degree and professional training, and how evaluation can occur within higher education, will all be transformed by the public availability of ChatGPT. Despite the temptation of a knee-jerk reaction prohibiting its use in education, a more tempered solution is likely the most viable: balancing the privacy, security and academic integrity risks with the possible benefits of its application. This position exerts intention, agency and influence over how ChatGPT will shape higher education (Archibald & Barnard, 2018). Even within an integrative model, educators must recognize the rapid advancements and forthcoming scalability of ChatGPT. The volume of input data used to train ChatGPT is continually growing.
New techniques are being developed to reduce bias in ChatGPT's modelling, which will contribute to the power and applicability of ChatGPT in the future, further heightening its human-like responses, its strengths and its risks. As such, ensuring a flexible process of assessment, and resulting institutional approaches and policies, will be paramount to keeping pace with the current and rapidly evolving state of this AI technology. In the words of Fasano and White (1982), “by identifying the possibilities (of the future), we can decide more wisely what we should do today to create a better world for tomorrow” (p. 20).

All authors have agreed on the final version and meet at least one of the following criteria (recommended by the ICMJE): (1) substantial contributions to conception and design, acquisition of data, or analysis and interpretation of data; (2) drafting the article or revising it critically for important intellectual content. This writing received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. No conflict of interest has been declared by the authors.
