Abstract

Purpose
This study extends prior accounting research on decision aids (DAs) relating to face validity. Specifically, it examines the effects of face validity through the presence of two levels of bias in DA output. The presence of bias in a DA does not affect how statistically informative the aid is, but it does decrease the aid's face validity.

Design/methodology/approach
The repeated-measures experimental design allows performance effects to be examined over time in response to different levels of bias in the DA output. Participants are provided with outcome feedback so that learning effects can be examined.

Findings
The findings suggest that non-expert DA users recognize the bias in the DA's suggestions, as evidenced by users' low agreement with the aid; however, they do not adjust for the bias in their performance, suggesting that non-expert users do not learn from the DA. Although users of an unbiased DA strongly agree with the DA's output, their individual performance deteriorates over time. Initially, users of an unbiased DA perform better than those who use a biased DA; over time, however, the performance of unbiased-aid users deteriorates while the performance of biased-aid users does not improve.

Practical implications
Companies developing DAs may need to consider the effects of using a DA under circumstances different from those under which the aid was developed, as such use may lead to biased DA output. This study has implications for firms that design, develop and use DAs.

Originality/value
This study considers a yet unexamined face validity issue: observable bias in DA output. It examines deterministic DAs designed to assist the decision-maker by combining multiple cues in a systematic and consistent manner. Each variable added to a DA carries an incremental cost in the aid's development, use and modification, and the results provide insights for the cost–benefit analyses conducted when developing a DA or considering the modification of an existing aid. Failure to change a DA because of face validity issues alone may result in a decline in user performance, so the cost of modifying a DA must be weighed against the benefits of improved performance. The study contributes insights into how users' responses to DA bias could affect assessments of the benefits of including an omitted variable in a DA.
