Abstract
Generative artificial intelligence in general, and ChatGPT in particular, have risen in importance. ChatGPT is widely known and increasingly used as an information source on many topics, including science. It is therefore relevant to examine how ChatGPT portrays science and science-related issues. Research on this question is lacking, however. Hence, we simulate "interviews" with ChatGPT and reconstruct how it presents science, science communication, scientific misbehavior, and controversial scientific issues. Combining qualitative and quantitative content analysis, we find that ChatGPT portrays science largely as the STEM disciplines, in a positivist-empiricist manner, and in a positive light. When comparing ChatGPT's responses to different simulated user profiles and responses from the GPT-3.5 and GPT-4 versions, we find similarities in substance: the scientific consensus on questions such as climate change, COVID-19 vaccinations, or astrology is consistently conveyed. Beyond these similarities, however, pronounced differences emerge in the personalization of responses to different user profiles and between GPT-3.5 and GPT-4.