Abstract

Computer adaptive testing (CAT) enables the individualization of tests and better accuracy in determining a respondent's knowledge level. In CAT, all test participants receive a uniquely tailored set of questions. The number of questions and the difficulty of the next question depend on whether the respondent’s previous answer was correct or incorrect. In order for CAT to work properly, it needs questions with suitably defined levels of difficulty. In this work, the authors compare question difficulty levels as determined by experts (teachers) and by students. Bachelor students of informatics in their first, second, and third year of studies at Subotica Tech—College of Applied Sciences had to answer 44 programming questions in a test and estimate the difficulty of each of those questions. Analysis of the correct answers shows that the basic programming knowledge taught in the first year of study evolves very slowly among senior students. The comparison of question difficulty estimations highlights that the senior students have a better understanding of basic programming tasks; thus, their estimation of difficulty approximates that given by the experts.

Highlights

  • Modern technologies offer numerous possibilities for improving knowledge assessment and the process of education

  • In a class held in an online environment, self-testing via computer-based tests (CBT) provides feedback that shows the students how well they are progressing in the acquisition of knowledge or skills

  • In a conventional one-size-fits-all test, respondents may feel discouraged if the questions are too difficult, or, on the other hand, they may lose interest if the tasks are too easy for their level of knowledge; computer adaptive tests address this by changing the level of question difficulty on the basis of the respondents’ abilities, similar to an oral exam



Introduction

Modern technologies offer numerous possibilities for improving knowledge assessment and the process of education. In a conventional one-size-fits-all test, respondents may feel discouraged if the questions are too difficult, or, on the other hand, they may lose interest if the tasks are too easy for their level of knowledge. The solution to this problem may be the application of computer adaptive tests, which can change the level of question difficulty on the basis of the respondents’ abilities, similar to an oral exam. Many different algorithms for selecting the next question can be applied. With this adaptive selection, respondents with less knowledge are given easier questions, while those better prepared for the test receive a set of more difficult questions.
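The adaptive selection described above can be sketched in a few lines of Python. This is an illustrative toy rule only, not the algorithm used in the study: it assumes a hypothetical question bank with a numeric `difficulty` field, keeps a running ability estimate, raises the estimate after a correct answer and lowers it after an incorrect one, and always serves the unused question whose difficulty is closest to the current estimate.

```python
# Toy sketch of adaptive question selection (illustrative assumption,
# not the selection algorithm from the paper).

def next_question(questions, asked, ability):
    """Pick the unused question whose difficulty is closest to `ability`."""
    candidates = [q for q in questions if q["id"] not in asked]
    return min(candidates, key=lambda q: abs(q["difficulty"] - ability))

def update_ability(ability, correct, step=0.5):
    """Raise the ability estimate after a correct answer, lower it otherwise."""
    return ability + step if correct else ability - step

# Hypothetical question bank; difficulty is on an assumed 1-5 scale.
bank = [{"id": i, "difficulty": d} for i, d in enumerate([1, 2, 2, 3, 3, 4, 5])]

ability, asked = 3.0, set()
for answer_correct in [True, False, True]:  # simulated responses
    q = next_question(bank, asked, ability)
    asked.add(q["id"])
    ability = update_ability(ability, answer_correct)
```

Real CAT systems typically replace this fixed-step update with an item-response-theory estimate of ability, but the core loop — estimate, select the best-matched item, re-estimate — is the same.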

Related Works
Research
Data Collection and Analysis
Test Design
Estimation
Comparison
Results
Comparing Students’ Estimations
Relationship between Student and Teacher Estimations
Discussion
Findings