Ethical guidelines for the use of generative artificial intelligence and artificial intelligence-assisted tools in scholarly publishing: a thematic analysis
Purpose: This analysis aims to propose guidelines for artificial intelligence (AI) research ethics in scientific publications, intending to inform publishers and academic institutional policies in order to guide them toward a coherent and consistent approach to AI research ethics. Methods: A literature-based thematic analysis was conducted. The study reviewed the publication policies of the top 10 journal publishers addressing the use of AI in scholarly publications as of October 2024. Thematic analysis using Atlas.ti identified themes and subthemes across the documents, which were consolidated into proposed research ethics guidelines for using generative AI and AI-assisted tools in scholarly publications. Results: The analysis revealed inconsistencies among publishers’ policies on AI use in research and publications. AI-assisted tools for grammar and formatting are generally accepted, but positions vary regarding generative AI tools used in pre-writing and research methods. Key themes identified include author accountability, human oversight, recognized and unrecognized uses of AI tools, and the necessity for transparency in disclosing AI usage. All publishers agree that AI tools cannot be listed as authors. Concerns involve biases, quality and reliability issues, compliance with intellectual property rights, and limitations of AI detection tools. Conclusion: The article highlights the significant knowledge gap and inconsistencies in guidelines for AI use in scientific research. There is an urgent need for unified ethical standards, and guidelines are proposed for distinguishing between the accepted use of AI-assisted tools and the cautious use of generative AI tools.
- Research Article
- 10.12688/mep.20554.1
- Oct 23, 2024
- MedEdPublish
Background ChatGPT is a large language model that uses deep learning techniques to generate human-like texts. ChatGPT has the potential to revolutionize medical education as it acts as an interactive virtual tutor and personalized learning assistant. We assessed the use of ChatGPT and other Artificial Intelligence (AI) tools among medical faculty in Uganda. Methods We conducted a descriptive cross-sectional study among medical faculty at four public universities in Uganda from November to December 2023. Participants were recruited consecutively. We used a semi-structured questionnaire to collect data on participants’ socio-demographics and the use of AI tools such as ChatGPT. Our outcome variable was the use of ChatGPT and other AI tools. Data were analyzed in Stata version 17.0. Results We recruited 224 medical faculty, the majority of whom [75% (167/224)] were male. The median age (interquartile range) was 41 years (34–50). Almost all medical faculty [90% (202/224)] had ever heard of AI tools such as ChatGPT. Over 63% (120/224) of faculty had ever used AI tools. The most commonly used AI tools were ChatGPT (56.3%) and QuillBot (7.1%). Fifty-six faculty use AI tools for research writing, 37 for summarizing information, 28 for proofreading work, and 28 for setting exams or assignments. Forty faculty use AI tools for non-academic purposes like recreation and learning new skills. Faculty older than 50 years were 40% less likely to use AI tools compared to those aged 24 to 35 years (adjusted prevalence ratio (aPR): 0.60; 95% confidence interval (CI): [0.45, 0.80]). Conclusion The use of ChatGPT and other AI tools was high among medical faculty in Uganda. Older faculty (>50 years) were less likely to use AI tools compared to younger faculty. Training on AI use in education, formal policies, and guidelines are needed to adequately prepare medical faculty for the integration of AI in medical education.
- Research Article
- 10.12688/mep.20554.3
- Apr 28, 2025
- MedEdPublish
- Research Article
- 10.12688/mep.20554.2
- Jan 23, 2025
- MedEdPublish (2016)
- Research Article
- 10.5204/mcj.3004
- Oct 2, 2023
- M/C Journal
during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (see The Effect of Open Access).
- Research Article
- 10.3390/educsci15040461
- Apr 8, 2025
- Education Sciences
This survey study aims to understand how college students use and perceive artificial intelligence (AI) tools in the United Arab Emirates (UAE). It reports students’ use, perceived motivations, and ethical concerns and how these variables are interrelated. Responses (n = 822) were collected from seven universities in five UAE emirates. The findings show widespread use of AI tools (79.6%), with various factors affecting students’ perceptions about AI tools. Students also raised concerns about the lack of guidance on using AI tools. Furthermore, mediation analyses revealed the underlying psychological mechanisms pertaining to AI tool adoption: perceived benefits fully mediated the relationship between AI knowledge and usefulness perceptions, peer pressure mediated the relationship between academic stress and AI adoption intent, and ethical concerns fully mediated the relationship between ethical perceptions and support for institutional AI regulations. The findings of this study provide implications for the opportunities and challenges posed by AI tools in higher education. This study is one of the first to provide empirical insights into UAE college students’ use of AI tools, examining mediation models to explore the complexity of their motivations, ethical concerns, and institutional guidance. Ultimately, this study offers empirical data to higher education institutions and policymakers on student perspectives of AI tools in the UAE.
- Front Matter
- 10.1016/j.jval.2021.12.009
- Jan 31, 2022
- Value in Health
The Value of Artificial Intelligence for Healthcare Decision Making—Lessons Learned
- Research Article
- 10.1108/oth-10-2024-0066
- Jan 28, 2025
- On the Horizon: The International Journal of Learning Futures
Purpose: This study aims to evaluate students’ intention and actual use (AU) of artificial intelligence (AI) tools to discover how the power of AI influences learning and academic success. Design/methodology/approach: This paper used the unified theory of acceptance and use of technology (UTAUT) to develop a structural equation model (SEM) and used convenience sampling to measure 304 students’ five-point Likert scale responses. The model was tested with AMOS-24 and SPSS-25, and the study found that AI boosted students’ learning experiences and explained the importance of AI skills and knowledge. Findings: Performance expectancy (PE), effort expectancy (EE), social influence, and facilitating conditions affect AU directly and indirectly via intention to use (IU), while subjective norms have no substantial influence on the use of AI tools. Attitude (ATT) moderates PE and EE, although the data show that ATT has no substantial effect on EE. Originality/value: These insights may help students understand how AI tools benefit them and what factors affect their utilization. When correctly designed and executed, UTAUT provides an appropriate integrated theoretical framework for robust statistical analysis such as SEM.
- Research Article
- 10.36128/priw.vi56.896
- Jul 8, 2025
- LAW & SOCIAL BONDS
The use of artificial intelligence (AI) tools in higher education has become increasingly important because of the time and effort savings and the speed of information transfer. However, many ethical and legal challenges make their use in this field a complex issue. Problems such as bias and discrimination arising from AI tools require the establishment of a legal system capable of controlling their use in an optimal manner. Yet the legal regulation of AI tools in higher education, especially in the fields of research and data analysis, has not reached the required level: although many countries have begun to use these tools in higher education and scientific research, the legal framework still lags behind. This research explores the legal and ethical challenges of using AI in higher education and scientific research, focusing on the importance of developing a legal framework capable of promoting the use of AI tools in the scientific and educational sectors. The paper highlights the most important relevant laws in technologically advanced countries in general to measure the extent to which they are reflected in reality.
- Research Article
- 10.59490/dgo.2025.1060
- Jun 30, 2025
- Conference on Digital Government Research
This paper aims to answer three main questions regarding the use of artificial intelligence (AI) tools in the Colombian judiciary. First, what type of AI tools do judges and judicial staff in Colombia access and use? Second, how and for what purposes are these AI tools used? Third, do demographic factors (e.g., age, gender) influence how judges and judicial staff approach AI tools? This paper is based on three comprehensive surveys conducted in 2024. Two surveys conducted by the authors targeted participants in the course “Artificial Intelligence for the Administration of Justice: Fundamentals, Applications, and Best Practices”, offered by the Universidad de los Andes and the Superior Council of the Judiciary (CSdJ). A total of 1,391 judicial staff members responded at the start of the course, and 824 responded at its conclusion. A third survey, conducted later by the CSdJ, gathered responses from 3,152 judicial personnel. Our analysis reveals that training significantly improved AI familiarity among judicial personnel—initially, 63% reported minimal knowledge, but after the 50-hour course, 85% claimed moderate to high familiarity. While approximately one-third of respondents initially used AI for work tasks, this increased to nearly half post-training. Over 80% of users accessed free AI versions, raising concerns about confidentiality as these platforms may share information with third parties. Judicial officials primarily employ generative AI for information searches and document writing, particularly for jurisprudence (59%), legislation (52%), and definitions (51%). This reliance on AI for information retrieval presents risks if outputs aren’t verified against reliable sources. Although age and gender disparities in AI familiarity exist, reported usage patterns show minimal demographic differences.
These findings emphasize the importance of enhancing digital literacy among judicial professionals and inform our recommendations for developing appropriate regulations and guidelines governing AI systems in the justice sector.
- Research Article
- 10.31703/gssr.2023(viii-ii).19
- Jun 30, 2023
- Global Social Sciences Review
This research paper presents an insightful investigation into students' perceptions and ethical considerations regarding the use of Artificial Intelligence (AI) tools in academia, focusing on the University of Limerick in Ireland. AI tools like OpenAI's ChatGPT have emerged as valuable assets in promoting interactive learning and enhancing student engagement. This research therefore aimed to explore the privacy and ethical considerations students have regarding the use of AI tools in education. Using a quantitative methodological approach, the study solicited students' attitudes, opinions, and patterns of use of AI tools. The study revealed intriguing perspectives on the data privacy concerns associated with AI tools. Students from technology- and science-focused schools displayed a higher degree of concern, suggesting a deeper understanding of potential privacy implications. Conversely, students from arts, humanities and social sciences, and law, politics and public administration displayed slightly lower levels of concern.
- Research Article
- 10.1177/27526461231215083
- Nov 11, 2023
- Equity in Education & Society
Students’ use of Artificial Intelligence tools to complete assignments raises issues of academic integrity. The purpose of this study was to explore students’ and faculty’s perspectives on the benefits and challenges of using ChatGPT and assistive Artificial Intelligence (AI) tools to complete assignments. This descriptive phenomenological qualitative study encompassed interviews with eight students who used Large Language Model (LLM) AI tools to complete their assignments and nine students who did not. It also included interviews with six faculty members about their perspectives on students’ use of LLM AI tools to complete assignments and their thoughts on the benefits and challenges. The participants were purposively selected. The data were coded based on Braun and Clarke’s (2013) six steps in thematic analysis. Descriptive, in vivo, and evaluative coding were used. Additionally, data were examined semantically and latently using reductionist analysis to determine the final themes. Five components of the Unified Theory of Acceptance and Use of Technology (UTAUT) were applied to the data collected and provided the framework for the study. Behavioural intention served as the foundation. Effort and performance expectancies and facilitating conditions were exemplified in participants’ responses about the use of ChatGPT, Grammarly, and other assistive AI tools and about plagiarism/academic integrity, and social influence was indicated when participants (both students and faculty) suggested the need to develop policies and procedures for the appropriate use of AI tools. Effort and performance expectancies and habits were found in the data collected in the form of consideration of the pros of using AI tools such as ChatGPT and assistive tools.
These pros include the time saved by generating information, examples for both students and faculty, and help in the teaching/learning process; one participant found that it motivated her. The cons cited were students’ lack of creativity and inability to think critically, the cost of assistive AI tools (related to the price component), the bandwidth needed to use them, the digital divide, and the false information generated. This study has significance for the use of ChatGPT and assistive AI tools in education and their ethical implications. It is recommended that specific policies be established and enacted to ensure the appropriate use of assistive AI and LLM tools.
- Research Article
- 10.37394/23209.2024.21.41
- Oct 9, 2024
- WSEAS TRANSACTIONS ON INFORMATION SCIENCE AND APPLICATIONS
The research aims to study the impact of the use of artificial intelligence (AI) tools in higher education institutions (HEIs) on building the professional competencies of future art specialists. The research employed quantitative and qualitative methods (in particular, modeling methods, pedagogical experiments, and a survey of respondents to assess the impact of AI tools on building professional competencies). The author’s definition of the concept of “professional competencies of art specialists” is proposed. Targeted tools were selected and used for building components of professional competencies. For example, VocalAnalysis AI tools were used to form the perceptual component for students majoring in Musical Art, Art Vision AI for students majoring in Fine Arts, and ChoreoVision AI for students majoring in Choreography. The results of the study show that students rated their level of ability to use AI as higher than medium. The questionnaire designed to study the impact of AI use on building the professional competencies of future specialists in art majors demonstrated a high level of agreement in the assessment of the impact of AI tools on the formation of various components of professional competencies. Further research can be aimed at the development and testing of an algorithm for objective expert evaluation of specific AI tools for the implementation of art projects by students of the specified art majors.
- Conference Article
- 10.56059/pcf11.2844
- Sep 1, 2025
Integrating artificial intelligence (AI) technologies into academic writing presents opportunities and challenges: the use of AI promotes efficiency in knowledge production but also exacerbates existing disparities between developed and developing nations, with developing nations lagging behind in access and in the expertise to use these tools productively. Moreover, while AI tools provide support for innovative research, writing assistance, and data analysis, there are growing concerns about the quality of postgraduate student theses and publications written using AI tools, as they contain false statements and violate research ethics. Based on survey data and secondary literature, this study investigated the current application of AI tools in academic writing by Humanities and Social Science postgraduate students at a Kenyan university. The study aimed to establish the uptake and use of AI tools in academic writing, assess student attitudes toward using AI tools in academic writing, and explore challenges influencing the use of AI tools in academic writing. The study revealed that students often use AI tools that generate content, provide analysis, and paraphrase texts, such as ChatGPT, Gemini, Grammarly, and QuillBot. Students had positive attitudes toward adopting AI tools for academic writing; however, they reported limitations in proficient use of AI tools and poor writing and research skills, along with a lack of ICT resources and AI tools. It is recommended that, for effective integration of AI tools in academic writing, higher education institutions provide ICT resources and AI tools and offer training in research skills, writing skills, and the use of AI tools for writing.
- Research Article
- 10.1111/nin.12556
- Apr 26, 2023
- Nursing Inquiry
Will ChatGPT undermine ethical values in nursing education, research, and practice?
- Research Article
- 10.17483/qc1kc694
- Jun 30, 2025
- Quality Advancement in Nursing Education - Avancées en formation infirmière
Introduction: The availability and use of artificial intelligence (AI) tools is accelerating significantly. As these technologies proliferate, many post-secondary institutions have responded by banning students from using AI tools such as ChatGPT and framing the conversation as breaches of academic integrity. Background: Despite these institutional responses, many students adopt these tools as part of their learning journey. In health care settings, the adoption of such tools in the context of patient care provision is a reality. Consequently, there is a relevant pedagogical opportunity to examine how such tools inform the experiential learning of nursing students and their future practice. Methods: To address the dearth of information regarding nursing students’ perceptions of using AI tools, a Canadian university teaching team incorporated ChatGPT into an undergraduate nursing course assignment. A pilot quasi-experimental pre-post-test survey design was employed to examine student perceptions of using ChatGPT. After obtaining institutional ethics approval, a neutral third party collected the anonymous data. Findings: Pilot study results highlighted significant student concerns regarding the ethics of using AI tools. Additionally, students described such tools as meaningful avenues to support learning access and equity. Finally, students identified a high probability of use of AI tools in their future practice, suggesting that exposure and support during learning can positively influence responses to these tools in practice settings. Conclusion: The students surveyed are now practising nurses; thus, findings may provide insight into perceptions of new nurses regarding the integration of AI to support competencies required by the nurses of tomorrow.