Abstract

The purpose of the present study was to evaluate the effects of evaluator intervention, task structure, and user experience on users' subjective evaluation of software usability. The study employed a 2 × 2 × 2 factorial between-subjects design with two levels of Evaluator Intervention (Intervention vs. Non-Intervention), two levels of Task Structure (Guided Exploration [free-form] vs. Standard Laboratory), and two levels of User Experience (Novice vs. Experienced). Users were asked to learn and then subjectively evaluate a restricted subset of 12 common word processing features over four hours of participation: Day 1 was a training day and Day 2 was a test day. The major finding was that users' subjective impressions of the software were affected by both user Experience and evaluator Intervention. For difficult-to-use word processing features, experienced users rated the features as more difficult to use under the Intervention condition than under the Non-Intervention condition; for novice users, this difference was in the opposite direction but not significant. The same pattern of results was obtained for subjective ratings of ease of learning, overall evaluation of the software, and confidence in ability to use the software. These results were interpreted within the context of attribution theory. The effect of task structure, although less prevalent, interacted with user experience in the evaluation of screen features and system capabilities. The relative lack of task structure effects was attributed to the difficulty of implementing free-form learning and to the number of problems encountered in using the software under Guided Exploration, which counteracted any of its benefits.
