Abstract

The public release of ChatGPT in late 2022 had many in the field of education worried. In response, this paper explored the future of college education and artificial intelligence (AI). First, we established a proper understanding of how large language models (LLMs) “train” and “learn,” along with their abilities and limitations. Simply put, while LLMs produce plausible linguistic output, they are “stochastic parrots” with no actual understanding of language.
Next, we examined the dangers of generative AI and found that it can facilitate the creation and dissemination of misinformation. Even when these systems are not used with malicious intent, the fact that their training data sets are drawn from the internet, which reflects majority thinking, means that they can perpetuate and amplify social inequality and hegemonic stereotypes and biases. Conversely, if we consider what is missing from the training data, it is only natural that marginalized voices become even more marginalized. In addition, setting aside the issue of the socially vulnerable, LLMs can only be trained on digital data, meaning that analog sources are ignored. This is in line with the idea of “the destruction of history” put forth by Joseph Weizenbaum, an early critic who warned of the dangers of artificial intelligence.
We then discussed the relationship between humans and machines and considered which relationships are problematic and which are desirable. Researchers in the aviation industry recognized the problem of automation bias from an early date, but this phenomenon can be seen in other areas of society as well. Put simply, when humans place too much trust in a machine, they abdicate their decision-making responsibility to it and thus fail to respond quickly when that machine malfunctions. LLMs do not endanger lives in the same way that airplanes do, but a similar bias can be seen with them as well. A more important issue, though, is that people are no longer seen as whole human beings but as computers. This tendency was evident long before the advent of computers, for example in attempts to quantify human intelligence through IQ tests, but it is a problem we must be particularly wary of in the age of AI.
Lastly, we considered how college education can find its way in the present situation. Educators in the US in particular, in dealing with ChatGPT, have pinpointed not the LLMs themselves but the “transactional nature” of education as the problem. That is, they argue that education has long since become less a process of learning and more a transaction in which students receive grades and degrees. Given this transactional environment, it is no wonder that students rely too heavily on ChatGPT. This over-reliance, however, comes with side effects: failing to learn how to think properly, a lack of sufficient academic information, and the adoption of an AI-based writing style. In response, US educators have proposed both “stick” solutions (strategies that make it difficult for students to use LLMs) and “carrot” solutions (strategies that encourage students to learn like human beings, not algorithms), but the heart of the matter seems to be a sense of responsibility. Creating an educational environment in which students can develop a sense of responsibility for themselves is the path forward for education in the age of AI. If we do this, LLMs can become a useful tool rather than an enemy to fear.
