Abstract

Generative AI models, with their enhanced capacity for conversation, will soon find widespread applications in qualitative research, especially in the disciplines of social science and public policy. Although researchers guarantee the confidentiality of their data, the tools they choose for data analysis remain largely unregulated, raising serious ethical concerns. Prior research has established the potentially hazardous effects of such transformative architectures on research integrity and ethics; however, the interventions required to alleviate the risks that impact the 3Rs (Reviewers, Researchers, and Research Respondents) have not yet been studied. We first analysed the potential risks associated with Large Language Models (such as GPTs) by examining scientific publications. We then conducted a 'risk workshop' with four qualitative researchers, followed by open-ended interviews with seven individuals from the 3R impact groups, to develop the various risk scenarios. We compared these risks against the AI-related policies of the European Union, Singapore, the United States, the United Kingdom, and China to identify regulatory gaps. The research output maps potential regulatory interventions for various LLM applications in qualitative research along a continuum, with nodality-based soft laws at one end and more extensive regulatory interventions (hard laws) at the other.
