Abstract

ChatGPT's role in medical writing is a topic of ongoing discussion. I tested whether ChatGPT can almost automatically generate a Correspondence or Letter addressed to a "translated" article, and thereby wish to stimulate discussion of ChatGPT use in medical writing. I input an English article of my own into ChatGPT and tasked it with generating an English Disagreement Letter (Letter 1). Next, I tasked ChatGPT with translating the target manuscript sequentially from English to French, to Spanish, and to German. I then tasked ChatGPT with generating an English Disagreement Letter addressed to the German manuscript (the triply translated version) (Letter 2). Letters 1 and 2 are readable and reasonable, hitting the very point that the author (myself) felt to be the weakness of the article. The Letters addressed to the French (single translation) and Spanish (double translation) versions, as well as the longer Letters corresponding to Letters 1 and 2, are also readable and thus hold up. Judging solely from this experiment, one may be able to write a letter without understanding the content of the paper being addressed, let alone its language. Although this modest experiment does not prove anything conclusive, I call for a comprehensive discussion of the implications of these findings.
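
For illustration only, the translation-chain procedure described above could in principle be scripted rather than run through the ChatGPT web interface. The sketch below is an assumption of mine, not the author's setup: it uses the OpenAI Python client, a placeholder model name, a hypothetical input file "article_en.txt", and paraphrased prompts.

# Hypothetical sketch of the translation-chain experiment (assumptions: OpenAI
# Python client, placeholder model "gpt-4o", OPENAI_API_KEY set in the environment).
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name, not specified in the source
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

article_en = open("article_en.txt").read()  # hypothetical file holding the English article

# Letter 1: Disagreement Letter addressed to the original English article.
letter_1 = ask(f"Write an English Disagreement Letter addressed to this article:\n\n{article_en}")

# Chain translation: English -> French -> Spanish -> German.
text = article_en
for lang in ("French", "Spanish", "German"):
    text = ask(f"Translate the following manuscript into {lang}:\n\n{text}")

# Letter 2: Disagreement Letter addressed to the triply translated (German) manuscript.
letter_2 = ask(f"Write an English Disagreement Letter addressed to this article:\n\n{text}")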
