Abstract

Using a repository of historical student responses to an actual course‐assigned essay prompt and a series of artificial intelligence (AI)‐generated responses to the same prompt, we conduct a single‐blind, randomized experiment to evaluate the performance of AI in agricultural and applied economics education. We also assess instructors' ability to detect the use of AI. We find that AI‐generated responses to the essay received statistically significantly higher scores than those of the average student. Instructors with previous exposure to dialog‐based AI were 13 times more likely to accurately detect AI‐generated essays than instructors without such exposure.
