Abstract

This study examined the content validity of a scoring rubric instrument for measuring science teachers' technological pedagogical content knowledge (TPACK) and the inter-rater reliability of the instrument in use. The work was conducted as part of a research and development project designed to produce instruments for measuring teacher knowledge. The analysis combined a qualitative analysis, based on triangulation of three validators' judgments, with a quantitative analysis of inter-rater reliability based on the Intraclass Correlation Coefficient (ICC) obtained for each question. Three university science education experts validated the suitability of the scoring rubrics within the TPACK framework. The inter-rater reliability study involved 100 participants, who answered the 15 questions on the instrument, and three experienced raters, who scored the participants' answers. The results show that the instrument content is qualitatively valid for measuring the knowledge tested and that inter-rater reliability is very high for all items, with an average ICC of 0.94 (very high).
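As a minimal sketch of how the item-level ICC analysis described above could be reproduced (not the authors' actual analysis script), the fragment below assumes each item's scores are arranged in a long-format table with hypothetical columns participant, rater, and score, and uses the pingouin library's intraclass_corr function; the exact ICC form (consistency vs. absolute agreement, single vs. average measures) used in the study is an assumption here.

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format data: one row per (participant, rater) score on one item.
    # In the study, 100 participants answered 15 items and 3 raters scored every answer,
    # so each item would yield a 100 x 3 table of scores.
    scores = pd.DataFrame({
        "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "rater":       ["R1", "R2", "R3"] * 3,
        "score":       [4, 4, 3, 2, 2, 2, 5, 4, 5],
    })

    # Compute the standard ICC variants for this item (pingouin reports ICC1 through ICC3k).
    icc = pg.intraclass_corr(data=scores, targets="participant",
                             raters="rater", ratings="score")

    # An average-measures, two-way ICC (e.g., ICC2k) is a common choice when reliability
    # is reported for the mean of several raters' scores.
    print(icc[["Type", "ICC", "CI95%"]])

Repeating this per item and averaging the item-level ICCs would yield an overall reliability figure of the kind reported above (0.94 on average).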
