Abstract

Surveys that require users to evaluate or make judgments about information systems and their effect on specific work activities can produce misleading results if respondents do not interpret or answer questions in the ways intended by the researcher. This paper provides a framework for understanding both the cognitive activities and the errors and biases in judgment that can result when users are asked to categorize a system, explain its effects, or predict their own future actions and preferences with respect to use of a system. Specific suggestions are offered for wording survey questions and response categories so as to elicit more precise and reliable responses. In addition, possible sources of systematic bias are discussed, using examples drawn from published IS research. Recommendations are made for further research aimed at better understanding how and to what extent judgment biases could affect the results of IS surveys.
