Abstract
All Decision Support Systems (DSS) are, by their nature, designed to improve decision-making effectiveness, yet a review of the experimental literature reveals that achievement of this objective is mixed. We propose that this is because DSS effectiveness is contingent upon a number of factors related to the task and the DSS under investigation. This paper reports a longitudinal experiment designed to evaluate the relationship between DSS effectiveness and two such factors: DSS sophistication and task complexity. In comparison to unaided human judgement, two levels of DSS were evaluated: a deterministic spreadsheet model and a probabilistic model with a graphical risk analysis aid. Our subjects made decisions in a business simulation comprising two successive phases of increasing task complexity. Initially, when task complexity was low, we found that neither DSS affected subjects' performance. In the more complex phase, users of both types of DSS performed significantly better than unaided subjects; however, risk analysis users performed no better than model-only users. Interestingly, DSS users performed less homogeneously than unaided subjects in the complex phase. DSS users had greater confidence and considered more alternatives than their unaided counterparts. Risk analysis users took longer making decisions in the early stages, while model-only users became the most efficient in the later stages.