Abstract

Objective: This case study sought to provide early information on the accuracy and relevance of selected GPT-based product responses to basic information queries, such as might be asked in librarian research consultations. We intended to identify positive possibilities, limitations, and ethical issues associated with using these tools in research consultations and teaching.

Methods: A case simulation examined the responses of GPT-based products to a basic set of questions on a topic relevant to social work students. The four chatbots (ChatGPT-3.5, ChatGPT-4, Bard, and Perplexity) were given identical question prompts, and responses were assessed for relevance and accuracy. The simulation was supplemented by reviewing actual user exchanges with ChatGPT-3.5 using a ShareGPT file containing conversations with early users.

Results: Each product provided relevant information in response to queries, but the nature and quality of the information and the sophistication of its formatting varied substantially. There were troubling accuracy issues with some responses, including inaccurate or non-existent references. The only paid product examined (ChatGPT-4) generally provided the highest-quality information, which raises concerns about equitable access to quality technology. Examination of ShareGPT conversations also raised issues regarding the ethical use of chatbots to complete course assignments, dissertation designs, and other research products.

Conclusions: We conclude that these new tools offer significant potential to enhance learning if well employed. However, their use is fraught with ethical challenges. Librarians must work closely with instructors, patrons, and administrators to ensure that this potential is realized while ethical values are safeguarded.
