Abstract

This study investigates how interactive dashboards influence decision making by exploring how specific dashboard features affect design task performance, efficiency, understanding, and confidence. An experiment was conducted in which undergraduate student participants were given a design activity and randomly assigned to one of five dashboards, each using the same underlying functions but varying in the visualization features employed. These features include different graphical representations of the design decision inputs and performance outputs. Participants were first asked to use their assigned dashboard to design a catapult system that maximizes launch distance while meeting requirements related to height, weight, and cost. Following the design task, they were asked a series of questions about their experiences with the dashboard and their understanding of the catapult model. A between-subjects analysis then evaluated how the dashboard design influenced various outcomes of interest. The results show that students who used the most feature-rich dashboard did not perform objectively better than those with the most feature-sparse dashboard, though their self-reported performance was higher. The performance of female versus male participants was also compared, with no significant differences found. The findings support the notion that dashboards should be designed with the minimal set of features needed to convey the necessary information, and they also highlight the disconnect between objective performance and user-assessed performance with interactive dashboards.
