Abstract

This mini-track is devoted to the topic of Nontraditional Computer Support for Design and User Evaluation. An objective in coordinating this mini-track was to encourage a wide range of papers that touched on the need for additional and nontraditional measures of performance for evaluating the progress people make as they use interactive computer systems. The potential benefit of such measurement tools and techniques has been recognized in numerous domains, including education, information visualization, digital libraries, and computer supported cooperative work (CSCW). The summaries from several conferences and workshops on human-centered systems contain many references to this subject. However, many questions remain about how and what to measure for interactive information retrieval systems. Currently, if systems are tested at all, designers must rely on real users to test them. While this constitutes an ultimate test, it also raises many issues, including comparing users' skills, standardizing test situations, and deciding how much training, help, and time to allot for the evaluation. Issues such as these suggest that techniques such as appropriate user modeling might assist in this type of evaluation.

While the majority of researchers agree on the benefit of user interaction metrics, the form of these metrics causes considerable debate. Current performance metrics are mathematically based. However, it is likely that interdisciplinary studies will be needed to define meaningful metrics for human-computer interactions, and it remains to be seen whether these metrics can be formulated mathematically. Norman [1] says: "The things we collect statistics about are primarily those things that are easiest to identify and count or measure, which may have little or no connection with those factors of greatest importance. We must be careful to measure those things that are of value to the humans interacting with our systems." Interactive systems must deliver what users want; however, it is the users who determine this value. We must determine the dimensions that form a basis for human value judgments. We need to understand how humans judge the usefulness of a system and develop metrics based on this understanding to evaluate and optimize systems. The value of interactive systems needs to be judged within a context; that is, it will need to be understood and established for multiple tasks [2].

User-centered design and usability evaluations have become common practice in many organizations, but they are still novel, and typical development cycles do not accommodate these practices. Widespread inclusion of usability engineering methods in development should be fostered by empirical studies that validate these methods and by case studies that address cost/benefit issues.

Paper contributions to this mini-track range from evaluation by means of empirical studies, field studies, or case studies to evaluation by computer simulations. The following methodologies are represented in these papers: (1) use of field research methods, (2) Web-based remote testing, (3) frameworks and methodologies for user interface design and development, (4) novel methods for obtaining user requirements, (5) evaluation of strategies for expert reviews, and (6) usability testing and the use of existing data from other domains (sociological, demographic, market analysis; cognitive, educational, and social psychology) in product design and evaluation.
