Abstract

Adoption of pre-trained large language models (LLMs) across an increasingly diverse range of tasks and domains poses significant problems for authorial attribution and other basic knowledge organization practices. Utilizing methods from value-sensitive design, this paper examines the theoretical, practical, and ethical issues introduced by LLMs and describes how their use challenges the supposedly firm boundaries separating specific works and creators. Focusing on the implications of LLM usage for higher education, we use hypothetical value scenarios and stakeholder analysis to weigh the pedagogical risks and benefits of these tools, assessing the consequences of their use on and beyond college campuses. While acknowledging the unique challenges presented by this emerging educational trend, we ultimately argue that the issues associated with these novel tools are indicative of preexisting limitations within standard entity-relationship models, not wholly new issues ushered in by the advent of a relatively young technology. We contend that LLM-generated texts largely exacerbate, rather than invent from scratch, the preexisting faults that have frequently posed problems for those seeking to determine, ascribe, and regulate authorship attributions. As the growing popularity of generative AI raises concerns about plagiarism, academic integrity, and intellectual property, we advocate for a reevaluation of reductive work-creator associations and encourage the adoption of more expansive authorial concepts.
