Abstract

Automatic detection of prerequisite relations between educational concepts has long been a challenging AI task. Identifying prerequisite relations enables students to study new subjects more effectively and systematically, while allowing instructors to better tailor learning materials to students' needs. To detect these relations accurately, however, an AI system must understand the context and meaning of each concept and how it relates to other concepts in the domain, which requires deep knowledge of the educational curriculum and the ability to analyze large amounts of text and data. Large language models (LLMs) are a recent innovation in AI. Trained on vast amounts of text from the internet, books, articles, and other sources, LLMs can understand and generate human-like text. They can also be fine-tuned on smaller, task-specific datasets to specialize in tasks such as document summarization, question answering, or detecting user sentiment in reviews. In this work, we introduce strategies for fine-tuning LLMs to improve their ability to detect prerequisite relations between educational concepts. To the best of our knowledge, this is the first work to use fine-tuned LLMs for prerequisite detection. Our evaluation results demonstrate that fine-tuned LLMs are effective models for prerequisite detection, and our fine-tuning process also facilitates the generation of explanations that shed light on the reasoning behind prerequisite relations. The datasets we generated and used for fine-tuning are made publicly available to the research community. We hope our contributions aid in organizing and presenting knowledge in education and serve as a foundation for future research in the field.
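The abstract does not specify the authors' model, data format, or training setup, so the following is only a minimal sketch of one plausible formulation: casting prerequisite detection as pairwise classification and fine-tuning a pretrained Hugging Face transformer on labeled concept pairs. The model name, example pairs, and hyperparameters below are illustrative assumptions, not the paper's method.

```python
# A minimal sketch (not the authors' code) of fine-tuning a pretrained
# transformer to classify whether concept A is a prerequisite of concept B.
# Model choice, example data, and hyperparameters are assumptions.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

# Hypothetical labeled pairs: 1 = "A is a prerequisite of B", 0 = otherwise.
pairs = [
    {"concept_a": "variables", "concept_b": "loops", "label": 1},
    {"concept_a": "recursion", "concept_b": "variables", "label": 0},
]
dataset = Dataset.from_list(pairs)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def tokenize(batch):
    # Encode each pair as two segments so the model attends to both concepts.
    return tokenizer(batch["concept_a"], batch["concept_b"],
                     truncation=True, padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="prereq-model",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()
```

Since the abstract also mentions generating explanations for detected relations, the authors' actual setup likely uses a generative LLM with instruction-style prompts rather than the encoder-classifier shown here; the sketch only illustrates the general fine-tune-on-task-specific-data workflow.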
