Abstract

This study compared user behavior under two testing conditions: no intervention (unfacilitated) versus moderate intervention (facilitated). Nineteen participants evaluated a complex 3-D application, with approximately half assigned to each condition. All participants carried out a series of tasks, and for each task we gathered completion times, ease-of-use and importance ratings, and open-ended comments and suggestions for improvement. At the end of the study, participants completed System Usability Scale (SUS) and other ratings and gave their overall impressions of the application. There were no group differences (facilitated versus unfacilitated) on any of the quantitative task measures, including percent success, time to complete the tasks, ease-of-use ratings, and importance ratings. However, we observed differences in suggestions for improving the application: comments were much richer and more useful from the facilitated group. There were also reliable group differences on some of the general measures collected after task completion, including SUS ratings. These findings are discussed in relation to other observations in the literature, and the pros and cons of each method are described.
