Abstract

Various studies have been conducted to measure the effectiveness of software education. We analyzed previously developed computational thinking test tools and examined how they were applied and verified in practice. Based on this analysis, we developed a 20-item questionnaire that categorizes the measured abilities into four areas: analysis, design, implementation, and reasoning. We surveyed 204 college freshmen enrolled in liberal arts computer programming courses and conducted an exploratory factor analysis to assess the validity and reliability of our questionnaire. Our results showed that previously used computational thinking test tools lacked the ability to measure problem-solving processes based on computational thinking. To address this, we revised the questionnaire items to reflect the computational-thinking-based problem-solving process and proposed a tool that assesses computational thinking through real-life material drawing on students’ empirical knowledge. The reliability statistics were as follows: analysis ability (α = .895), design ability (α = .727), implementation ability (α = .745), and reasoning ability (α = .833). Measuring computational thinking requires a testing tool that can address real-world problems. We aimed to develop a research tool for measuring computational thinking based on a case of applying and revising existing test tools.
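The reliabilities reported above are Cronbach's α coefficients. As an illustration only, and not the authors' analysis code, the sketch below shows the standard α computation for one subscale from a respondents × items score matrix; the 5-item subscale and the simulated responses are hypothetical stand-ins for the survey data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]                          # number of items in the subscale
    item_vars = scores.var(axis=0, ddof=1)       # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 204 respondents (as in the survey) on a 5-item subscale.
rng = np.random.default_rng(0)
trait = rng.normal(size=(204, 1))                       # shared underlying ability
items = trait + rng.normal(scale=0.8, size=(204, 5))    # correlated item responses
print(f"alpha = {cronbach_alpha(items):.3f}")
```

Higher α indicates more internally consistent items; values around .7 or above, like those reported for the four ability areas, are conventionally taken as acceptable reliability.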
