Abstract

The rapid progress of artificial intelligence in generative modeling is marred by widespread misuse. In response, researchers have turned to use-based restrictions—contractual terms prohibiting certain uses—as a “solution” to abuse. While these restrictions can benefit artificial intelligence governance in API-gated settings, their failings are especially significant for open-source models: not only do they lack any means of enforcement, but they also perpetuate the current proliferation of tokenistic efforts toward ethical artificial intelligence. This observation echoes a growing literature that points to ineffectual efforts in “AI ethics” and underscores the need to move away from this paradigm. This article provides an overview of these drawbacks and argues that researchers should redirect their efforts toward studying deployable, effective, and theoretically grounded solutions, such as watermarking and model alignment from human feedback, to effect tangible change in the current climate of artificial intelligence.
