Abstract

Over the past decade, testing the usability of print software manuals has matured into a rich area of study, characterized by a wide range of qualitative and quantitative methods. Some of the most common methods include field observations, surveys, interviews, protocol analyses, focus groups, iterative testing, and quasi-experimental lab simulations [1]. Such diverse approaches to usability testing offer an opportunity for complementary inquiries and analyses. For example, findings from focus groups can provide key questions for experimental researchers to pursue in greater depth and with greater possibility of generalization. Essentially, this complementary approach envisions an interaction between the academy, with its propensity toward pure, experimental research, and industry, with its more applied approaches to alpha and beta testing. Patricia Wright, a specialist in usability studies, has long argued that integrating pure and applied research is the best means of expanding our knowledge about effective document design [2, 3]. Such integration reveals both the immediately applicable features of effective manuals and the more theoretical boundaries of the textual features that make a difference for general types of tasks, readers, and contexts of use. To realize the potential of this conversation between pure and applied research, documentation researchers and practitioners must clearly understand the limitations of the conclusions that investigators derive from specific methods of inquiry. In this article, I look solely at experimental usability tests that rely on quantitative methods of analysis. I analyze the ways in which the research designs and questions of the past ten years of experimental studies affect the strength of their cumulative conclusions and the confidence we can have in those conclusions. My purpose is not to give preference to experimental research as the most important approach to usability testing. Far from it. Rather, my critical review has two purposes: (1) to facilitate the dialogue between academic and industrial researchers by identifying the limits of current experimental findings; and (2) to propose research agendas and designs for future experimental usability tests that can strengthen the conclusions such researchers offer for practical consideration. My evaluation of ten years of experimental usability studies shows that many of their conclusions are not strong enough to serve as valid, generalizable, and replicable foundations for subsequent research, be it pure or applied. These conclusions can be strengthened by designing studies that pay more attention to the sequencing and integration of related investigations and that institute better controls for sample selection, size, and composition. This article discusses my overall findings, the details of which I will develop more fully in my presentation.
