Abstract

Intelligent Tutoring Systems (ITSs) are complex programs and a popular research topic in educational software because they demonstrate the highest teaching efficiency. A benefit of ITSs is that the same teaching style can be replicated, with the added functionality of an interactive and adaptive teaching program. To understand how teaching methods and ITSs are evaluated, we visualized and analyzed the process of the ITS research loop. An ITS has to be created by a developer, who makes concrete implementation decisions; AI specialists, psychologists, teachers, experts of the teaching domain, knowledge engineers, UI/UX designers, and others are additionally involved. The product is an ITS that is updated regularly, and these updates can affect the efficiency of the teaching methods. Yet the evaluation of the ITSs themselves and of the selection of teaching methods is rarely done. Typically, a research paper about an ITS evaluates the system in one specific state, and this research then influences the people creating new ITSs. The problem arises when comparing two ITSs with different underlying implementations or teaching methods: in this case, the factors that could have led to changes in the resulting teaching efficiency multiply, which is partly a fault of the research loop. Ideally, we want the same system with different teaching methods, or the same teaching method in different systems. We also analyzed how the ITS research field itself is divided and identified three research categories. The first category is fundamental research, which specifies the basic knowledge of ITSs. The second is the implementation category: domain-specific prototypes of ITSs (e.g. mathematics or language). The third, real-use adaptation, improves on existing ideas of large-scale ITS use, profitability, or usability without much technical knowledge.
Making ITS implementations more comparable can be achieved through a homogeneous fundamental specification. On the basis of the knowledge of the ITS research loop and the different subfields of ITS research, fundamental definitions can be made about the underlying learning material. The material is split into two categories, because an ITS must handle each differently: one part is the passive learning content, the other the active learning content. These cater either to the presentation of learning material or to testing the student's knowledge. The student's knowledge can be verified by evaluating their actions in a specific task. We define a process for using an ITS in which the passive learning content is taught first, and the topic is then tested and evaluated through the active learning content. An ITS needs to adapt to the student, which requires an in-depth understanding; this is possible through analysis of the student's actions. Verification against the student's knowledge model helps to cement the ITS's assessment of the student. The division of content benefits domain experts because it makes explicit how ITSs handle learning content; for developers, the division yields better modularity and easier comparison with other modules. In conclusion, we analyzed the current state of the processes of creation and research, and their linking, in the field of ITS. We divided the research field and proposed a division and clear definition of the learning material in the ITS process.
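The passive/active division and the teach-then-test process described above can be sketched in code. This is a minimal illustrative model, not an implementation from the paper: all class and function names (PassiveContent, ActiveContent, StudentModel, teach_then_test) are hypothetical, and the equality check stands in for whatever task-evaluation logic a real ITS would use.

```python
from dataclasses import dataclass, field

@dataclass
class PassiveContent:
    """Material the ITS presents to the student (e.g. an explanation)."""
    topic: str
    text: str

@dataclass
class ActiveContent:
    """Material the ITS uses to test the student: a task plus a check."""
    topic: str
    prompt: str
    expected: str

@dataclass
class StudentModel:
    """Tracks which topics the ITS currently believes are mastered."""
    mastered: set = field(default_factory=set)

    def update(self, topic: str, correct: bool) -> None:
        # Verification of the student's action updates the knowledge model.
        if correct:
            self.mastered.add(topic)
        else:
            self.mastered.discard(topic)

def present(content: PassiveContent) -> None:
    """Presentation phase: show the passive learning content."""
    print(f"[{content.topic}] {content.text}")

def teach_then_test(passive: PassiveContent, active: ActiveContent,
                    answer: str, model: StudentModel) -> bool:
    """Teach the passive content first, then verify via the active content."""
    present(passive)                       # 1. teach (passive content)
    correct = answer == active.expected    # 2. evaluate the student's action
    model.update(active.topic, correct)    # 3. update the knowledge model
    return correct
```

The point of the sketch is the separation of concerns: presentation logic touches only passive content, evaluation logic touches only active content, and the student model is updated solely from verified actions.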
