Abstract

Cognitive load studies mostly rely on measures of perceived cognitive load. Single-item subjective rating scales are the dominant measurement practice for investigating overall cognitive load. Usually, either invested mental effort or perceived task difficulty is used as an overall cognitive load measure. However, the extent to which the results of these two single items differ has not yet been sufficiently investigated. Although subjective rating scales are widely used, their validity has been questioned. This study examines the construct validity of both cognitive load rating scales (invested mental effort, perceived task difficulty) using relative task difficulty and task demands (cognitive processes and availability of possible answer options) as criteria, adds further evidence supporting the validity of single-item subjective ratings as an indicator of overall cognitive load, and shows how ratings of cognitive load differ depending on whether the invested mental effort or the perceived task difficulty item is used. The results indicate that self-ratings might be influenced by the availability of possible answer options as well as by the cognitive processes necessary to work on a task. The findings also confirm the idea that self-ratings of perceived task difficulty and invested mental effort do not measure the same construct but different aspects of overall cognitive load. Furthermore, our findings clearly suggest that researchers should carefully consider when and how frequently cognitive load is measured, as delayed ratings are closely related to the more demanding items within a set of items.
Considering the advantages of single-item subjective ratings (easy to implement even in large samples, low time demands, and suitability for repeated measures) and the disadvantages of alternative ways to measure cognitive load (cost and time inefficiency and the risk of imposing additional load), the current results support the use of these items to gain an impression of overall cognitive load. However, the results also suggest that the two items do not measure the same thing; researchers should therefore carefully consider which item they use and how this may limit the conclusions of their study.
