Abstract

The emergence and proliferation of artificial intelligence (AI) tools have left the realm of science fiction and the province of computer science research and reached the everyday activities of academics and various support staff. AI promises to automate and facilitate a range of research tasks and to increase scientific productivity, and can thus be expected to raise new questions and dilemmas that challenge the systems of accountability currently in place to safeguard academic integrity.
This poster presents preliminary results of an analysis of seven prominent sets of ethical guidelines (international and Norwegian): the Vancouver protocol, the ALLEA guidelines, the NENT and NESH guidelines, and the professional codes of ethics of IFLA, ALA, and the Norwegian Union of Librarians. EBLIDA and LIBER were also consulted, but do not offer ethical guidelines of their own. The main research question is: to what extent do current ethical guidelines support researchers and librarians in dealing with the ethical questions brought about by the proliferation of new AI tools?
The concern that emergent technologies challenge values, norms, and practices in academia is not new. For example, ALLEA (2017) acknowledges that the values and principles it lays out are “affected by social, political or technological developments and by changes in the research environment” (p. 3). Nonetheless, only the Vancouver protocol (updated in May 2023) provides explicit recommendations on AI; it essentially prescribes that authors and reviewers disclose whether and how they used AI tools. The other documents, ranging from 2008 to 2022, mention neither AI nor the possibility of automation technologies in academic or library work.
Despite the absence of advice on AI, the analysis revealed interesting issues concerning ethical guidelines and emergent technologies. ALA (2017) makes extensive recommendations about social media, both as a tool for libraries’ own work and as a domain in which patrons demand librarians’ expertise. Similar needs for new competencies and responsibilities can be expected from AI. The Norwegian Union of Librarians (2008), moreover, encourages the adoption of free software, open standards, and open-source code; this stance may gain new momentum with the emergence of proprietary tools and algorithms, particularly if they are trained on public data curated by, among others, libraries.
Ethical guidelines are general by design: they state a commitment to certain values and explain how those values should guide particular tasks and practices, but they do not prescribe how concrete tools should be employed. In this regard, the current ethical guidelines do offer a sound basis for assessing new ethical questions. Yet, unlike other tools employed in research and library work, AI challenges assumptions the guidelines take for granted, such as what constitutes information, or whether non-human entities can be considered authors, sources, or neither.
In conclusion, it may be beneficial to revise ethical guidelines, less because AI requires concrete recommendations and more because it challenges substantive assumptions upon which the guidelines rely. In addition, new possibilities afforded by AI may put pressure on certain values, such as reproducibility and academic craftsmanship. Assuming that the academic and library communities consider these values important to preserve, it may be beneficial to reaffirm them in light of the changing landscape.
