Abstract

Since the introduction of the Force Concept Inventory (FCI) in 1992, concept inventory (CI) tests have been widely used to measure conceptual knowledge and to study teaching issues across almost all disciplines and levels of study. A standard concept inventory analysis includes the design of a quality test, adequate realization of the testing, a calibration procedure, and a comprehensive analysis of the findings. The CI test calibration is usually carried out with the Rasch psychometric technique, which also yields crucial indicators of knowledge such as item difficulties and students' abilities. Whereas the quality of a test's design can be guaranteed by using certified, professionally developed CI tests, the statistical adequacy of the testing merits critical attention before proceeding to the final step of the analysis. The analysis of CI outcomes can also be advanced by auxiliary tools and complementary techniques. In this framework, we propose enforcing the test index validity requirement to qualify CI outcomes as local or global. Specifically, the conclusions of a CI analysis are acceptable for the whole population from which the sample was drawn only if the test's indexes comply with the validity requirements provided by index theory. When the test indexes fall outside the validity range and re-running the tests is impractical because of objective circumstances or research specifics, we suggest injecting new records into the existing dataset, or mixing data gathered from different samples, until the new indexes lie in the desired range. Using this methodology, we have reviewed our previous FCI tests, which were initially intended to demonstrate the impairment of learning in physics triggered by online learning during the pandemic closure. Through this renormalization procedure, we obtained a credible assessment of the understanding of mechanics and electromagnetism among high school students who followed online lectures during the pandemic closure. Also, by using index validity as an auxiliary tool, we found that, for measuring the knowledge of electromagnetism in students enrolled in branches where physics is a basic discipline, a shortened version of the BEMA test was a better instrument than the corresponding shortened EMCI test. Next, we used the optimal histogram idea, borrowed from distribution-fitting procedures, to identify the natural levels of students' abilities for solving a given CI test. Finally, we propose combining an ad-hoc Likert scale assignment for common errors in physics exams with the FCI designation of basic commonsense confusions in mechanics, in order to identify their pairing features in ordinary exams. We believe the methods proposed herein can improve CI analysis in a more general sense.

DOI: 10.28991/HEF-2023-04-01-08
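
To make the calibration step concrete, here is a minimal sketch of a Rasch (one-parameter logistic) fit by joint maximum likelihood, in Python with NumPy. The abstract does not name the authors' estimation software, so the function `rasch_jml`, the gradient-ascent scheme, and the learning-rate and iteration settings are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def rasch_jml(X, n_iter=500, lr=0.05):
    """Joint maximum-likelihood fit of the Rasch (1PL) model.
    X: 0/1 answer matrix (students x items).
    Returns student abilities theta and item difficulties b (logits)."""
    n, k = X.shape
    theta = np.zeros(n)   # student abilities
    b = np.zeros(k)       # item difficulties
    for _ in range(n_iter):
        # P(correct) = sigmoid(theta_i - b_j)
        p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        resid = X - p
        theta += lr * resid.sum(axis=1)   # d(logL)/d(theta_i)
        b -= lr * resid.sum(axis=0)       # d(logL)/d(b_j) has opposite sign
        b -= b.mean()                      # fix the scale origin at mean difficulty
    return theta, b

# Note: perfect all-0 or all-1 score patterns drift to +/- infinity under
# joint maximum likelihood; real calibrations treat such records separately.
```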
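The index validity check can be sketched in the same spirit. The cut-offs below (Cronbach's alpha >= 0.7, Ferguson's delta >= 0.9) are thresholds commonly quoted in the concept inventory literature and are assumed here for illustration; the paper's index theory may use different criteria. The pooling step at the end only mirrors the "injecting records / mixing samples" renormalization idea described in the abstract.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a 0/1 score matrix X (students x items)."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def ferguson_delta(X):
    """Ferguson's delta: how broadly the total scores are spread."""
    n, k = X.shape
    freqs = np.bincount(X.sum(axis=1).astype(int), minlength=k + 1)
    return (n**2 - (freqs**2).sum()) / (n**2 - n**2 / (k + 1))

def indexes_valid(X, alpha_min=0.7, delta_min=0.9):
    """True if the whole-test indexes fall in the assumed validity ranges."""
    return cronbach_alpha(X) >= alpha_min and ferguson_delta(X) >= delta_min

# "Renormalization" sketch: if one sample alone fails the check, pool it
# with records from a comparable sample and re-test the indexes.
# pooled = np.vstack([sample_a, sample_b])
# print(indexes_valid(pooled))
```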
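For the optimal histogram idea, one standard choice from distribution fitting is the Freedman-Diaconis binning rule, available in NumPy. Reading the resulting bin structure of the Rasch abilities as boundaries between natural ability levels is one plausible rendering of the procedure, assumed here rather than taken from the paper; the `abilities` array below is placeholder data.

```python
import numpy as np

# Hypothetical Rasch abilities for 200 students (stand-in for real output).
abilities = np.random.default_rng(0).normal(0.0, 1.0, 200)

# The Freedman-Diaconis rule picks a bin width from the data's spread,
# a common "optimal histogram" criterion in distribution fitting.
edges = np.histogram_bin_edges(abilities, bins="fd")
counts, _ = np.histogram(abilities, bins=edges)
print(f"{len(counts)} candidate ability levels, boundaries at {np.round(edges, 2)}")
```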
