Abstract

This article examines the complex relationship between human intelligence and so-called artificial intelligence in the context of an ongoing project to develop a writing assistant for learners of Spanish, both native and non-native. The authors used ChatGPT to generate validation data for assessing the language model's performance on different parameters before testing it with real users. The article describes how they approached the generation of validation data, what they learned along the way, and what the results were. It first introduces the project and outlines its main phases. It then explains the criteria the authors used to determine the types of problems the validation data should cover, and how they instructed the chatbot to generate that data. Finally, it summarises the main lessons they learned from working with the chatbot and some of the challenges they faced in getting it to work properly. The description is accompanied by numerous examples. Drawing on their critical and constructive engagement with the chatbot, and on close interdisciplinary collaboration with IT specialists, the authors conclude that the key challenge is to demonstrate in practice that humans, not the chatbot, are the masters. In this context, they argue that generative AI language models are not here to replace us, but to help us produce faster and with higher quality, meeting our growing and increasingly diverse demands for a better life.
