Abstract

Background
Learning analytics (LA) research often aggregates learning process data to extract measurements indicating constructs of interest. However, whether such aggregation is warranted to produce reliable measurements has not been explicitly examined. Reliability evidence for aggregate measurements has rarely been reported, leaving the implicit assumption that such measurements are free of error.

Objectives
This study addresses these gaps by investigating the psychometric pros and cons of aggregate measurements.

Methods
This study proposes a framework for aggregating process data, which specifies the conditions under which aggregation is appropriate and offers a guideline for selecting the proper type of reliability evidence and the procedure for computing it. We support and demonstrate the framework by analysing undergraduates' academic procrastination and programming proficiency in an introductory computer science course.

Results and Conclusion
Aggregation over a period is acceptable, and may even improve measurement reliability, only if the construct of interest is stable during that period. Otherwise, aggregation may mask meaningful changes in behaviours and should be avoided. When selecting the type of reliability evidence, a critical question is whether the process data can be regarded as repeated measurements. Another question is whether the processes have unequal lengths and whether individual events are unreliable. If the answer to the second question is no, segmenting each process into a fixed number of bins assists in computing the reliability coefficient.

Major Takeaways
The proposed framework can serve as a general guideline for aggregating process data in LA research. Researchers should check and report reliability evidence for aggregate measurements before interpreting them.
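
To make the binning step concrete, the sketch below (in Python) segments each learner's event timestamps into a fixed number of bins and treats the per-bin scores as repeated measurements for a reliability coefficient. This is a minimal illustration, not the paper's implementation: the helper names bin_counts and cronbach_alpha, the use of event counts as bin scores, and the choice of Cronbach's alpha as the reliability coefficient are all assumptions made for demonstration.

    import numpy as np

    def bin_counts(events, k):
        """Segment one learner's event timestamps into k equal-width bins
        spanning that learner's own process, and count events per bin.
        (Illustrative helper, not from the paper.)"""
        events = np.asarray(events, dtype=float)
        edges = np.linspace(events.min(), events.max(), k + 1)
        counts, _ = np.histogram(events, bins=edges)
        return counts

    def cronbach_alpha(scores):
        """Cronbach's alpha for an (n_learners, k_bins) score matrix,
        treating each bin as one repeated measurement of the construct."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()   # sum of per-bin variances
        total_var = scores.sum(axis=1).var(ddof=1)    # variance of aggregate score
        return (k / (k - 1)) * (1.0 - item_var / total_var)

    # Toy data: event timestamps (e.g., days into an assignment window)
    # for three learners whose processes have unequal lengths.
    learners = [
        [0.5, 1.0, 2.0, 6.5, 9.0, 9.5],
        [1.0, 3.0, 4.5, 8.0],
        [0.2, 0.4, 5.0, 7.5, 9.8, 9.9, 10.0],
    ]
    X = np.vstack([bin_counts(ev, k=4) for ev in learners])
    print(cronbach_alpha(X))

Because each learner's own time span is divided into the same number of bins, processes of unequal length still yield an equal number of repeated measurements, which is what makes the coefficient computable across learners.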
