Abstract

In Jordanian higher education institutions, a competency exam was developed to verify that students attain particular competence levels. The results of the competency exam are among the key performance indicators (KPIs) used to evaluate the quality of academic programs and universities. Numerous methods exist for evaluating student performance based on academic achievement, including conventional statistical approaches and machine learning (ML). The objective of this paper is to develop an ML-based framework that helps decision-makers and universities evaluate academic programs by analyzing competency exam data to identify programs and learning outcomes that need to be improved. The framework can also reduce exam costs by substituting ML predictions for the actual administration of the exam. We have created a dataset to support further research; it includes demographic and academic data about students, such as gender, university grade average, type of university, and competency exam outcomes broken down by level and competency. Experiments supported the claim that models trained on samples from a student sub-dataset outperform models trained on samples from the entire dataset. The experiments also demonstrated that ML algorithms are an effective tool for recognizing patterns in student performance. No single ML model outperformed all others; however, the multilayer perceptron (MLP) produced the most accurate models, making it the most useful for building a robust framework.
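As a rough sketch of the kind of model the abstract describes, and not the paper's actual implementation, the snippet below trains a scikit-learn multilayer perceptron on synthetic student records with the listed feature types (gender, grade average, university type) to predict a competency-exam outcome. All column names, values, and the pass/fail label are invented for illustration; the paper's real dataset and features may differ.

```python
# Illustrative only: synthetic student data standing in for the paper's dataset.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

rng = np.random.default_rng(0)
n = 500
# Hypothetical stand-ins for the features the abstract lists.
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], n),
    "grade_average": rng.uniform(60, 100, n),
    "university_type": rng.choice(["public", "private"], n),
})
# Synthetic pass/fail label loosely correlated with the grade average.
y = (df["grade_average"] + rng.normal(0, 10, n) > 75).astype(int)

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(), ["gender", "university_type"]),
    ("num", StandardScaler(), ["grade_average"]),
])
model = Pipeline([
    ("pre", preprocess),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
])

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

A model of this kind, trained on historical exam results, is what would allow the framework to estimate outcomes without administering the exam to every student.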
