"Although statistical language modeling has been a fairly active area of academic research in artificial intelligence (AI) since the 1980s, the recent explosion of both technological innovation and public attention around LLMs has dramatically changed the space of applications for this work. The unprecedented scale of these new models carries with it emergent capabilities—particularly in the form of generative models—that have made the technology generalizable to a broad range of use cases and accessible to the general public. Yet these rapidly developing features also pose novel risks, ranging from environmental impacts and copyright violations to misinformation and hate speech. Traditional regulatory frameworks are not well-equipped to match either the rapid pace of innovation or the ability of LLMs to transcend the jurisdictions of individual governing bodies. Thus far, the governance gap has been filled by soft law frameworks such as private company standards, voluntary codes of conduct, and design guides from non-regulatory standard-setting organizations. Still, there is rising demand to solidify these soft guidelines into hard law. In addition to offering a historical overview of LLMs and highlighting some of the most pressing concerns involving LLMs today, we discuss legislative attempts to address these concerns and outline potential complicating factors."