Abstract

From predictive medicine to autonomous driving, advances in Artificial Intelligence (AI) promise to improve people’s lives and benefit society. As systems that utilize these advances increasingly migrated from research labs into the real world, new challenges emerged. For example, when and how should predictive models fit into physicians’ decision-making workflow such that the predictions impact their decisions appropriately? These are challenges of translation: translating AI systems from remarkable technological achievements into real-world, socio-technical systems that serve human ends. My research focuses on this critical translation, that is, on the user experience (UX) design of AI systems. The prevalence of AI might suggest that the UX design community has effective design methods and tools to excel in this translation. While this is true in many cases, some challenges persist. For example, designers struggle to account for AI systems’ unpredictable errors, and these errors damage UX and can even lead to undesirable societal impacts. UX designers routinely grapple with technologies’ unanticipated technical or human failures, with a focus on mitigating technologies’ unintended consequences. What, then, makes AI different from other interactive technologies? A critical first step in systematically addressing the UX design challenges of AI systems is to articulate what makes these systems so difficult to design in the first place. This dissertation delineates whether, when, and how the UX of AI systems is uniquely difficult to design. I synthesize prior UX and AI research, my own experience designing human-AI interactions, my studies of experienced AI innovation teams in industry, and my observations from teaching human-AI interaction. I trace the nebulous UX design challenges of AI back to just two root challenges: uncertainty around AI systems’ capabilities and the complexity of what the systems might output.
I present a framework that unravels their effects on design processes: the AI systems’ “design complexity” framework. Using the framework, I identify four levels of AI systems, and at each level designers are likely to encounter a different subset of design challenges. Current design methods are most effective in eliciting, addressing, and evaluating the UX issues of Level 1 systems (probabilistic systems with known capabilities and few possible outputs). Current methods are least effective for Level 4 systems (evolving, adaptive systems that can learn from new data post-deployment and can produce complex outputs that resist abstraction or simulation). Levels 2 and 3 are intermediate. I further demonstrate the usefulness of this framework for UX research and practice through two case studies. In both cases, I engaged stakeholders in their real-world contexts and addressed a critical challenge in fitting cutting-edge AI systems into people’s everyday lives. The first is the design of a clinical decision-support system that can effectively collaborate with doctors in making life-and-death treatment decisions; it exemplifies Level 1 systems. The second is an investigation of how Natural Language Generation systems might seamlessly serve authors’ communicative intent; it illustrates Level 4 systems and reveals the limits of UX design methods and processes widely in use today. By teasing apart the challenges of routine UX design from those distinctive to AI systems, the framework helps UX researchers and design-tool makers address AI systems’ design challenges in a targeted fashion.

