Abstract
Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers "entirely on their own." For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least "look like" good papers, many students will complete paper assignments in a way that fails to develop their philosophical abilities. This problem persists even if students can produce better papers with AI and even if instructors can detect AI-generated content with decent reliability. But LLMs also create a pedagogical opportunity. We propose that instructors shift the emphasis of their assignments from philosophy papers to "LLM dialogues": philosophical conversations between the student and an LLM. We describe our experience using such assignments over the past several semesters. We argue that, far from undermining quality philosophical instruction, LLMs allow us to teach philosophy more effectively than was possible before.