Abstract

Generative artificial intelligence (GenAI) is a category of AI technology capable of producing various types of content, including text, images, audio, video, 3D models, simulations and synthetic data. Although it has existed for some time, it has been popularised in recent months by text and image GenAI tools such as ChatGPT, Google Bard, LaMDA, BlenderBot, DALL-E, Midjourney and Stable Diffusion, some of which have already received new and upgraded versions.
The main issue for scholarly research and publication stems from a technological breakthrough: AI tools, based on machine learning models and usually trained on large volumes of data, no longer merely assist researchers in recognising patterns and making predictions, but also generate content. This raises several questions: At a general level, is it acceptable to use generated content in academic publications? Does the use of such tools in research and publications violate academic honesty? Does a researcher infringe another person's intellectual property rights when using these tools?
This paper seeks to answer these questions with the aim of suggesting whether, and to what extent, GenAI needs to be regulated within academic institutions or beyond. Additionally, the paper investigates models for such regulation, as several academic institutions around the world have already attempted to regulate GenAI and many such processes are ongoing.
