Abstract

Background
ChatGPT is tested every day by millions of users with different use cases. One of these is exploring the gap between theoretical and practical ethical problems and how it is affected by the ongoing development of ChatGPT. The aim of this report is to present the results of testing ChatGPT in ethical decision-making in research ethics and its applicability in ethics education.

Methodology
The tests were conducted between February and April 2023, a period during which ChatGPT received three updates. The GPTZero AI detector was used to test whether the AI-generated text could be detected as not written by a human. For ethical decision-making, a four-step model developed at the Medical University - Pleven was applied to the Tuskegee experiment case.

Results
Two tests were conducted, in February and in April. In both, ChatGPT was given a simple task: to analyse the Tuskegee experiment by applying a methodology for case analysis. In February it used a six-step method; in April, a four-step approach. In both cases ChatGPT identified ethical problems regarding informed consent, human rights, and harms. Given more detailed instructions, ChatGPT followed them to some degree. It identified the issue of vulnerability and the relevance of the Nuremberg Code and the Declaration of Helsinki, but it could not interpret them without an additional plugin. Given a simple instruction, ChatGPT produced content that GPTZero detected as written by AI. When instructed to create content with a high degree of burstiness and perplexity, and given more detailed instructions about the methodology, it produced content of which two-thirds was detected as written by AI.

Conclusions
Given the task, ChatGPT can identify ethical issues at a basic level. Even with more detailed instructions, it cannot engage in detailed ethical reasoning and would not be sufficient for professional ethical decision-making. It could help in ethics education, but with certain limitations.
Key messages
• ChatGPT is still unable to engage in detailed ethical reasoning, and researchers should be careful if they plan to use it in their scientific work.
• Educators should always check whether the content of their students' work was produced by AI and should have ethical guidelines for using AI in education.
