This study investigates the possibility that the characteristics of assessment tools may influence the results of individual competency evaluations. A 28-item self-report survey was developed to assess core competencies and was compared with the results obtained from K-CESA’s performance-based and self-report assessments. The study involved 117 undergraduate students from S University who participated in the K-CESA assessment. Correlation analysis, Support Vector Machine (SVM), and linear regression models were applied to examine the relationships between the self-report survey results and the K-CESA outcomes and to assess how accurately the survey predicts them. The correlation analysis revealed that, even when assessing the same competencies, the relationship between the K-CESA performance-based evaluation results and the results of the self-report survey developed in this study was very weak. In contrast, the K-CESA self-report evaluation results correlated significantly with this study’s self-report survey results and were predicted with strong accuracy, indicating that the K-CESA self-report results can be effectively predicted even with fewer items. These findings highlight the importance of assessment methods in determining competency evaluation results. Furthermore, the study provides practical implications by proposing directions to reduce discrepancies between self-report measures widely used in education and actual performance, thereby improving the validity of competency assessment tools.
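To illustrate the kind of analysis the abstract describes, the sketch below shows how correlation analysis, an SVM, and linear regression could be combined to relate a 28-item self-report survey to K-CESA scores and to gauge predictive accuracy. It is a minimal sketch in Python with scikit-learn; the file name, column names, and cross-validation setup are illustrative assumptions, not the authors' actual code or data layout.

```python
# Hypothetical sketch of the analysis pipeline described in the abstract:
# correlate the 28-item self-report survey with K-CESA scores, then compare
# SVM and linear regression in predicting each K-CESA outcome.
import pandas as pd
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("competency_scores.csv")           # assumed file: one row per student (n = 117)
survey_items = [f"item_{i}" for i in range(1, 29)]   # assumed names of the 28 self-report items
targets = ["kcesa_performance", "kcesa_selfreport"]  # assumed K-CESA score columns

X = df[survey_items]

for target in targets:
    y = df[target]

    # Correlation between the survey total score and this K-CESA outcome
    r, p = pearsonr(X.sum(axis=1), y)
    print(f"{target}: Pearson r = {r:.2f} (p = {p:.3f})")

    # Predictive accuracy of SVM regression and linear regression
    # (5-fold cross-validated R^2)
    for name, model in [("SVR", make_pipeline(StandardScaler(), SVR(kernel="rbf"))),
                        ("Linear regression", LinearRegression())]:
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
        print(f"  {name}: mean R^2 = {r2.mean():.2f}")
```

Under this setup, a weak correlation and low cross-validated R^2 for the performance-based target alongside a strong correlation and high R^2 for the self-report target would mirror the pattern reported in the abstract.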