Abstract

The authors analyze the ability of ChatGPT to generate effective instructions for a consequential task: taking a COVID-19 test. They compare the instructions generated from a commercial prompt with those provided by the test manufacturer. They also analyze the input, the prompt itself, to address prompt-engineering issues. The results show that although the output from ChatGPT exhibits certain conventions of documentation, the human-authored instructions from the manufacturer are superior in most respects. The authors conclude that when it comes to creating high-quality, consequential instructions, ChatGPT might be better seen as a collaborator with human technical communicators than as a competitor.
