Abstract

The study presents the AVALANCHE visualization test-bed for sensemaking in ill-structured problem domains. AVALANCHE allows users to develop and frame hypotheses, analyze those hypotheses in the experimental domain, and provide cases for simulation experiments. The visualization and sensemaking support module in AVALANCHE provides the human–computer interface and visualization support. Validation experiments comparing groups aided with visualization and support tools against unaided groups were performed on two open-ended sensemaking cases provided by a military subject matter expert. Statistical analyses revealed mean performance differences in plan accuracy, planning time, and number of cue prompts between the aided and unaided groups across task scenarios. In general, the aided group achieved the highest mean plan-outcome accuracy, the lowest planning time, and the fewest cue prompts. The intention is to extend the study to collaborative sensemaking tasks to examine the effects of negotiation on team planning time, cue-prompting frequency, and different cue-prompting modalities.
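The abstract reports only that mean differences were observed between aided and unaided groups; it does not specify the data, sample sizes, or statistical tests used. As a purely illustrative sketch, a between-group comparison of one such metric could be run as follows (Python, with hypothetical scores; the variable names and values are assumptions, not the study's data).

```python
# Illustrative sketch only: hypothetical per-group plan-accuracy scores for one
# task scenario. The AVALANCHE study's actual data and test choices are not
# given in the abstract.
import numpy as np
from scipy import stats

aided_accuracy = np.array([0.82, 0.78, 0.91, 0.85, 0.88])     # hypothetical aided group
unaided_accuracy = np.array([0.64, 0.70, 0.59, 0.72, 0.66])   # hypothetical unaided group

# Welch's t-test (no equal-variance assumption) comparing the two group means
t_stat, p_value = stats.ttest_ind(aided_accuracy, unaided_accuracy, equal_var=False)
print(f"mean aided = {aided_accuracy.mean():.2f}, mean unaided = {unaided_accuracy.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```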
