Karahoca et al. [1] addressed an important topic in their paper concerning the use of usability evaluation methods to choose an appropriate software prototype for tablet personal computers. They applied a combination of usability evaluation methods to evaluate the usability of two software prototypes with different graphical user interfaces (GUIs): iconic and non-iconic. These GUIs were designed to replace the paper-based forms at an emergency department of a Turkish hospital. The whole healthcare staff of the department, consisting of 6 physicians and 32 nurses, participated as evaluators in the study. The findings of the comprehensive evaluation showed that the iconic GUI prototype had better usability than the non-iconic GUI prototype. This study contributes to the body of knowledge concerning the usability of GUIs. Such studies are important from a practical perspective because, in the competitive market of clinical software, they help healthcare organizations select the systems that best suit their users' needs. Moreover, the results of these studies provide practical input to software developers concerning the design of software that is easy to use and that fits the workflow of healthcare providers.

Although Karahoca and his colleagues applied a comprehensive method and obtained interesting findings, we would like to draw attention to some methodological issues concerning the way the usability evaluation methods were employed in this study. The authors mention that Hom [2] identifies three types of usability evaluation methods (testing, inspection, and inquiry) and state that they applied heuristic evaluation and cognitive walkthrough (CW), both expert inspection methods, to evaluate the usability of the two prototypes.

1. Recruitment of evaluators for heuristic evaluation

The authors recruited potential users of the prototypes as heuristic evaluators. Based on a computer literacy test, half of these users were classified as novice users. In a study identifying the factors affecting heuristic expertise and the level of expertise needed to conduct a heuristic evaluation, Kirmani [3] showed that three factors (usability experience, experience with heuristic evaluation, and heuristic training) significantly affect the outcomes of heuristic evaluation, whereas domain expertise does not have a large impact on the outcomes. Nielsen [4] likewise showed that the performance of novice heuristic evaluators, who have general computer knowledge but no special usability expertise, was fairly poor compared to that of evaluators with usability expertise. Novice evaluators must first become knowledgeable of and proficient in applying heuristics [5]. Therefore, the validity of the heuristic evaluation results of Karahoca et al., which were obtained with "usability novices" of whom half were computer illiterate, can be disputed.

2. Recruitment of evaluators for cognitive walkthrough

CW is a usability inspection method that evaluates the ease with which a typical new user can successfully learn to perform a task using a given interface design. As also stressed by Hom, in CW either usability specialists or software developers should examine the behavior of the interface [6]. In the study performed by the authors, however, CW was again carried out by end users and not by usability experts. In usability inspection methods such as CW, experts evaluate a user interface without involving users.
This is in contrast to usability testing, in which evaluators let users work with the system while recording the user sessions for later analysis of usability problems.

The approach that the authors followed, assessing the learnability of the prototypes by real users, is not in agreement with the CW method. In CW, inspectors should know the interface before applying the method and should then speculate about the ease with which a novice user can learn how to use the system, taking user background knowledge such as computer literacy into account.

3. Usability problems

This usability evaluation study lacks the detection of usability problems, which is the main goal of every usability evaluation study [4,7]. This lack can affect the results of the study in several ways. (A) Comparing the effectiveness of the prototypes based on scenario completion rates and completion times, without a careful review and analysis of the main usability problems that can potentially affect user interaction and task outcomes, does not seem valid. Users, for example, could have completed a scenario in a shorter time by skipping some non-mandatory steps that hindered them during the task.