Abstract

Online education has become an essential part of the modern education system, but maintaining the integrity of online examinations remains a challenge. A significant increase in cheating in online examinations (from 29.9% before COVID-19 to 54.7% during COVID-19, according to a recent survey) underscores the need for online exam proctoring systems. Traditionally, educational institutions use a variety of question types in onsite exams: multiple-choice questions (MCQs), analytical questions, descriptive questions, etc. For online exams, form-based MCQ exams are popular, although in disciplines such as math, engineering, architecture, and art, paper-and-pen tests are typical for proper assessment. In form-based exams, students' attention is directed toward the display device, and cheating behavior can be identified as the deviation of head and eye gaze from the display. In paper-and-pen exams, students' attention is mainly on the answer script rather than on the device. Identifying cheating behavior in such exams is not trivial, since complex body movements must be observed. Previous research has focused on the deviation of the head and eyes from the screen, which is better suited to form-based exams. Most of these systems are also resource-intensive: along with a webcam, they require additional hardware such as sensors, microphones, and security cameras. In this work, we propose an automated proctoring solution designed for the specific requirements of paper-and-pen online exams. Our approach tracks head and eye orientation and lip movement in each frame and defines a movement as a change of orientation. We associate cheating with frequent, coordinated movements of the head, eyes, and lips and calculate a cheating score indicative of the frequency of such movements. A case is marked as cheating if the score exceeds a proctor-defined threshold (which may vary with the requirements of the discipline). The proposed system has five major parts: (1) identification and coordinate extraction of selected facial landmarks using MediaPipe; (2) orientation classification of the head, eyes, and lips with a K-NN classifier based on those landmarks; (3) identification of abnormal movements; (4) calculation of a cheating score from abnormal movement patterns; and (5) a visual representation of student behavior to support the proctor in early intervention. The system is robust because it observes movement patterns over a sequence of frames and considers the coordinated movement of the head, eyes, and lips rather than treating a single deviation as cheating, which minimizes false positives. Visualization of student behavior is another strength: it enables the human proctor to take preventive measures rather than merely penalizing the student based on the final cheating score. We collected video data from 16 student volunteers at the authors' university, each of whom participated in two well-instructed mock exams: one with cheating and one without. We achieved 100% accuracy in detecting noncheating cases and 87.5% accuracy for cheating cases with the threshold set to 40.
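As a concrete illustration of steps (1) and (2), the minimal sketch below extracts a handful of MediaPipe FaceMesh landmarks from a webcam frame and classifies head orientation with a k-NN model. The landmark indices, orientation labels, and stand-in training data are our own illustrative assumptions, not the authors' actual configuration.

# Sketch of steps (1)-(2): landmark extraction with MediaPipe FaceMesh
# and head-orientation classification with k-NN. Landmark indices, the
# label set, and the training data are illustrative placeholders.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical subset of FaceMesh landmark indices (nose tip, chin,
# eye corners, mouth corners) used as orientation features.
LANDMARK_IDS = [1, 152, 33, 263, 61, 291]

face_mesh = mp.solutions.face_mesh.FaceMesh(
    static_image_mode=False, max_num_faces=1, refine_landmarks=True)

def extract_features(frame_bgr):
    """Return a flat (x, y) feature vector for the selected landmarks,
    or None if no face is detected in the frame."""
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    lms = results.multi_face_landmarks[0].landmark
    return np.array([[lms[i].x, lms[i].y] for i in LANDMARK_IDS]).ravel()

# Stand-in training set: feature vectors labeled with head orientation.
# In practice these would come from annotated exam-video frames.
X_train = np.random.rand(40, len(LANDMARK_IDS) * 2)
y_train = np.random.choice(["front", "left", "right", "down"], size=40)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

def classify_orientation(frame_bgr):
    feats = extract_features(frame_bgr)
    return None if feats is None else knn.predict([feats])[0]

In use, classify_orientation would be called once per captured frame, producing a per-frame label sequence for each of the head, eyes, and lips (the latter two with their own landmark subsets and label sets).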
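Steps (3) and (4) can be sketched in the same spirit: a movement is registered whenever the predicted orientation changes between consecutive frames, and coordinated head, eye, and lip movements accumulate into a cheating score that is compared against the proctor-defined threshold. The scoring rule and window size below are illustrative guesses, since the abstract does not give the exact formula.

# Sketch of steps (3)-(4): flag per-frame movements as orientation
# changes, then score coordinated head/eye/lip movements. The scoring
# rule and window size are assumptions, not the paper's formula.
from collections import deque

def movement_flags(labels):
    """True for each frame whose predicted orientation differs from the
    previous frame's, i.e., a 'movement' in the abstract's sense."""
    return [prev != cur for prev, cur in zip(labels, labels[1:])]

def cheating_score(head, eyes, lips, window=30):
    """Illustrative score: accumulate coordinated head/eye/lip movements,
    weighting dense bursts within a sliding window more heavily."""
    recent = deque(maxlen=window)
    score = 0.0
    for h, e, l in zip(movement_flags(head), movement_flags(eyes),
                       movement_flags(lips)):
        recent.append(h and e and l)   # coordinated movement this frame
        score += sum(recent)           # denser recent bursts add more
    return score

# Toy per-frame label sequences (stand-ins for classifier output).
head = ["front", "front", "left", "left", "front"] * 20
eyes = ["center", "center", "left", "left", "center"] * 20
lips = ["closed", "closed", "open", "open", "closed"] * 20

THRESHOLD = 40  # proctor-defined; may vary by discipline
print(cheating_score(head, eyes, lips) > THRESHOLD)

Requiring all three movement streams to coincide before a frame contributes to the score is what suppresses false positives from a single stray glance, matching the abstract's emphasis on coordinated rather than isolated deviations.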
