Abstract

This study examined the reliability and validity of Web-based portfolio peer assessment. Participants were 72 second-year students at a senior high school taking a computer course. The results indicated that: 1) student raters scored the same portfolio inconsistently, indicating a lack of inter-rater reliability; 2) two-thirds of the raters were inconsistent when assessing different portfolios, indicating a lack of intra-rater reliability; 3) peer-assessment scores did not agree with teacher-assessment scores, indicating weak criterion-related validity; 4) peer-assessment scores differed significantly from end-of-course examination scores, implying that Web-based portfolio peer assessment failed to reflect learning achievement, again indicating weak criterion-related validity. In short, Web-based portfolio peer assessment was not a reliable or valid assessment method.
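The abstract does not report which statistics the study used to evaluate these properties. Purely as an illustration, the short Python sketch below uses hypothetical scores and Spearman rank correlations (a common, but here assumed, choice) to show how the inter-rater agreement check and the two criterion-related comparisons could be computed in principle; all data and names in it are made up.

# Illustrative only: the abstract does not state the study's actual statistics or data.
import numpy as np
from scipy import stats

# Hypothetical data: rows = 5 peer raters, columns = 8 portfolios (scores 1-10).
peer_scores = np.array([
    [7, 5, 9, 6, 8, 4, 7, 6],
    [6, 6, 8, 5, 9, 5, 6, 7],
    [8, 4, 7, 7, 7, 3, 8, 5],
    [5, 7, 9, 4, 8, 6, 5, 6],
    [7, 5, 8, 6, 9, 4, 7, 7],
])
teacher_scores = np.array([7, 5, 9, 6, 8, 4, 7, 6])       # teacher marks for the same portfolios
exam_scores = np.array([72, 55, 88, 64, 81, 47, 70, 66])  # end-of-course examination results

# Inter-rater reliability: average pairwise rank correlation between raters.
n_raters = peer_scores.shape[0]
pairwise = [stats.spearmanr(peer_scores[i], peer_scores[j])[0]
            for i in range(n_raters) for j in range(i + 1, n_raters)]
print(f"Mean pairwise inter-rater correlation: {np.mean(pairwise):.2f}")

# Criterion-related validity: mean peer score vs. teacher score per portfolio.
mean_peer = peer_scores.mean(axis=0)
rho_teacher, p_teacher = stats.spearmanr(mean_peer, teacher_scores)
print(f"Peer vs. teacher: rho = {rho_teacher:.2f}, p = {p_teacher:.3f}")

# Criterion-related validity: mean peer score vs. end-of-course examination score.
rho_exam, p_exam = stats.spearmanr(mean_peer, exam_scores)
print(f"Peer vs. exam: rho = {rho_exam:.2f}, p = {p_exam:.3f}")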
