Abstract

In 2023, scholarly communities are witnessing a spring of Artificial Intelligence (AI)-powered tools for scientific work. Scholars are tempted to integrate various time-saving AI applications into their workflows, from data analysis to disseminating research results. Among these novel “research assistants”, several enhanced discovery services apply machine learning to identify the most relevant results for the information seeker and present them to the user in innovative visualizations. The rapid emergence of these tools has raised concerns about the impact of AI technology on scientific research and has led to demands for transparency, accountability, and explainability in new AI tools. From a systems viewpoint, responsibility for the impact of a technology extends beyond its developers to the broader society. User communities, including librarians providing services for academia, are counterparts in shaping the effects of AI technology systems. Individuals decide how they behave with new information technology, for example, whether they trust a system and its output. Thus, an individual user is also part of the socio-technical evolution toward transparent, accountable, and explainable AI. In this study, we explore the challenges of adopting AI tools in scientific research at the level of an individual librarian working for academia. We aim to detect poorly addressed mindsets around explainability, fairness, and privacy, termed “blind spots” in AI ethics (Hagendorff, 2022). The goal is to understand the “determinants” of librarians’ information behavior with novel AI tools. We focus on two AI-powered visual discovery services, openknowledgemaps.org and www.litmaps.com, which help users navigate and analyze research articles as concept graphs. In this poster, our primary research question is: What are the determinants of librarians’ intentions when they adopt or use new AI-powered tools? We conducted an expert evaluation (Tessmer, 1993) of the two discovery services using the Theory of Planned Behavior (TPB) as a theoretical framework, which explains human behavior through three kinds of individual beliefs: attitudes, norms, and control. This framework helped us detect new “blind spots” among the behavioral determinants that have gone unnoticed in recent discourse about AI ethics in libraries. Our study indicated a gap in the area of normative beliefs, a “blind spot”: the social pressure to adopt the newest technology quickly, combined with the lack of library-specific norms for using AI in academia, may become a handicap for an individual librarian deciding whether or not to use an AI tool.
