Abstract

Although existing lab-based formative evaluation is frequently and effectively applied to improving usability of software user interfaces, it has limitations that have led to the concept of remote usability evaluation. Perhaps the most significant impetus for remote usability evaluation methods is the need for a project team to continue formative evaluation downstream, after deployment. The usual kinds of alpha and beta testing do not qualify as formative usability evaluation because they do not yield detailed data observed during usage and associated closely with specific task performance. Critical incident identification is arguably the single most important source of this kind of data. Consequently, we developed and evaluated a cost-effective remote usability evaluation method, based on real users self-reporting critical incidents encountered in real tasks performed in their normal working environments. Results show that users with only brief training can identify, report, and rate the severity level of their own critical incidents.
