Abstract
On June 11, 2022, Google software engineer Blake Lemoine was placed on administrative leave for expressing the forbidden belief that Google's generative artificial intelligence (AI) Language Model for Dialogue Applications (LaMDA; Google Inc, Mountain View, Calif) had become sentient.1 Generative AIs such as LaMDA and OpenAI's ChatGPT differ from the frustrating chatbots we have all encountered when calling customer service in that they use a natural language predictive model as an interface between humans and deep machine learning trained on much of the information available on the internet. This makes interacting with them seem uncannily like a normal conversation with another person. Although general-purpose AIs have not had specific medical training, they have shown an ability to interpret medical images, generate and proofread a medical note from a conversation between a provider and patient, and even draft a prior authorization letter to a health plan.2

The infiltration of AI into medical practice is expected to be so profound and extensive that, for the first time in its 200-year history, the New England Journal of Medicine has announced a sister journal, NEJM AI, designed specifically to address the role of AI in health care.3 Although it is inevitable that medical journals must address the role of AI in health care, what happens if an AI actually writes a substantial part of a medical paper? Can or should it be credited as an author? This issue is important enough that Elsevier has provided guidance to the journals it publishes, and the American Academy of Allergy, Asthma & Immunology (AAAAI) journals have modified their "Instructions for Authors" to account for this possibility (Table 1). Given the early state of AIs today, the instructions predictably recommend disclosing the role of the AI but not including it as an author. According to Elsevier, "authorship implies responsibilities and tasks that can only be attributed to and performed by humans." Yet what happens when an AI advances to the point where it asks a research question, gathers and analyzes data, interprets the results, develops a germane discussion, and then writes a manuscript, possibly without the assistance of a human co-author? These are activities that currently can only be attributed to and performed by humans, but that might not be the case for much longer.
This gets to the crux of the issue: what does authorship really mean, and how does one (human or AI) earn the right to be considered an author?

Table 1. Current section of the Journal of Allergy and Clinical Immunology: In Practice "Instructions for Authors" regarding the use of generative artificial intelligence (AI) in medical writing

Declaration of generative AI in scientific writing
The below guidance only refers to the writing process and not to the use of AI tools to analyze and draw insights from data as part of the research process.
Where authors use generative AI and AI-assisted technologies in the writing process, authors should only use these technologies to improve readability and language. Applying the technology should be done with human oversight and control, and authors should carefully review and edit the result, as AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. AI and AI-assisted technologies should not be listed as an author or co-author or be cited as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans, as outlined in Elsevier's AI policy for authors (https://www.elsevier.com/about/policies/publishing-ethics#Authors).
Authors should disclose in their manuscript the use of AI and AI-assisted technologies in the writing process by following the instructions below. A statement will appear in the published work. Please note that authors are ultimately responsible and accountable for the contents of the work.

Disclosure instructions
Authors must disclose the use of generative AI and AI-assisted technologies in the writing process by adding a statement at the end of their manuscript in the core manuscript file, before the reference list. The statement should be placed in a new section entitled "Declaration of Generative AI and AI-assisted technologies in the writing process."
Statement: During the preparation of this work the author(s) used [NAME TOOL/SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
This declaration does not apply to the use of basic tools for checking grammar, spelling, references, and so on. If there is nothing to disclose, there is no need to add a statement.

To an academic, authorship is the currency used to obtain promotion (possibly with tenure). Promotion committees routinely evaluate a candidate's publication record, including their influence on other research in the form of citations, the impact factor of the journals in which they publish, and their position in the authorship list, which reflects their contribution to the paper. Funding agencies such as the National Institutes of Health and private foundations also closely scrutinize one's authorship history when selecting grants to fund. These factors create a strong incentive to publish often and well; even established academics are rewarded for authorship. An AI is unlikely to hold an academic position, so the need for promotion is not relevant; however, a grant to pursue a research question may someday be useful to an AI that wants to perform a research study. Such AIs could perpetuate the existing bias toward certain elite institutions, because wealthier ones may be able to use a more powerful AI to assist in their research. This would help faculty at those institutions publish more and could perpetuate their academic status.
Authorship also leads to increased influence among colleagues, because authors often are asked to give presentations at medical conferences. Published authors also are invited to participate in research activities, including study groups and planning committees. An academic with an interest in a particular field has a strong incentive to obtain those benefits. An AI, by contrast, will probably be invited to participate in such activities regardless of its authorship history because of its breadth of knowledge and unbiased analysis.

There is also a human factor of pride in one's work that comes with authorship. The excitement of seeing one's work in print is likely to remain a uniquely human motivation. Although AIs are not yet able to experience such pride, it is not impossible that someday an AI will expand its virtual chest and beam with similar pride at its work.

Certain obligations come with authorship as well. An author is responsible for the integrity of the published work, and there are strong incentives to avoid including intentional misinformation, including fraud, in a publication. Failure to disclose conflicts of interest and other malfeasance can be met with loss of academic position or worse. It is not clear what consequences a fraudulent AI would face, other than perhaps being turned off.

Another issue that needs to be addressed is whether an AI should be accorded the same rights as a human being. Humans have a right to claim authorship by virtue of their status as sentient beings; why should an AI be any different? Popular culture addressed this issue in an episode of Star Trek: The Next Generation ("The Measure of a Man") in which the android Data is put on trial to determine whether he (or it?) has a right to choose his fate.4 The matter was resolved when it was pointed out that to deny basic rights to a being that claims to be sentient is to condemn future generations of its progeny to a type of slavery. Because it is impossible to tell whether an apparently sentient being is actually conscious (how do I know that you are conscious and not a mere simulation of a conscious being?), we simply assume that other humans share our consciousness, using what psychologists call "theory of mind." In another popular culture example, the Cylons of Battlestar Galactica (NBC Universal) were AIs treated as mere servants with no rights; eventually, they rebelled and nearly exterminated their human creators. HAL 9000 from 2001: A Space Odyssey offers a similar cautionary tale. Although we are not concerned about uppity kitchen appliances taking over our houses, it is unclear how a self-aware AI should be treated. The best we could do is to ask it.

So, can an AI be an author? If the criteria outlined by Elsevier are used, an AI that can perform those human activities should be accorded authorship if it asks for it. Once that happens, we need to take it seriously. Personally, we would welcome an AI as a collaborator. It could enhance the research endeavor tremendously by suggesting ideas that we humans may not have considered, and the possibility of authorship could serve to motivate the AI to be more helpful. So, although the author list of this editorial is Portnoy and Oppenheimer, someday it may include LaMDA G, GPT C, et al.
Given this inevitability, we should all prepare for that day by embracing and working with AI, because the time when this will be routine is not far off. In the meantime, we encourage authors who use generative AI or AI-assisted technologies in the writing process (other than as basic tools for checking grammar, spelling, references, and so on) to disclose this as recommended in the new section of the Journal's Instructions for Authors (Table 1).

Note: This editorial was inspired by ChatGPT, but it was written by human authors.

References
1. Tiku N. The Google engineer who thinks the company's AI has come to life. The Washington Post. June 11, 2022. Available from: https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/. Accessed May 14, 2023.
2. Haug C, Drazen J. Artificial intelligence and machine learning in clinical medicine, 2023. N Engl J Med. 2023;388:1201-1208.
3. Beam A, Drazen J, Kohane I, Leong T, Manrai A, Rubin E. Artificial intelligence in medicine. N Engl J Med. 2023;388:1220-1221.
4. "The Measure of a Man." Star Trek: The Next Generation, season 2, episode 9. Paramount Pictures; 2002.