Abstract

The introduction of ChatGPT, powered by the GPT-3.5 model, in November 2022 caused a sensation in the academic community, leaving many astounded by its capabilities. The new release emulates human responses far more closely than its predecessors. Among its remarkable capabilities, it can answer questions, catalog items in MARC21, recommend reading lists, and make suggestions on a wide array of topics. To assess ChatGPT’s efficacy in aiding library users, the authors of this paper conducted an experiment comparing ChatGPT’s performance with that of librarians in answering reference questions. Thirty questions were randomly selected from the transaction log of reference inquiries received between June 1, 2023 and July 31, 2023 at the Rider University Libraries; these queries constituted 34% of the total user questions during the two-month period. The authors compared ChatGPT’s answers with those of the reference librarians for accuracy, relevance, and friendliness. The findings indicate that the reference librarians markedly outperformed their robotic counterpart. A notable issue is ChatGPT’s lack of knowledge of local policies and practices, which hinders its ability to provide satisfactory answers in those areas. OpenAI posits that ChatGPT’s proficiency can be enhanced through targeted fine-tuning on locally specific information. For now, ChatGPT remains a valuable tool for librarians.
