Designing the User Experience of Machine Learning Systems

Part of the AAAI Spring Symposium Series

March 27–29, 2017
Palo Alto, CA

Organizing committee

Mike Kuniavsky, PARC
Elizabeth Churchill, Google
Molly Wright Steenson, Carnegie Mellon University

Symposium Summary


Designing the User Experience of Machine Learning Systems was an AAAI Symposium held at Stanford University, Stanford, California, from March 27–29, 2017. The symposium brought together experts from a range of disciplines, roles, and backgrounds. Discussion centered on a number of themes: the need for better tools for exploration, experimentation, and simulation around inter-system cooperation and conflict resolution; better models and practices around data cleaning, curation, management, and provenance auditing; and recommendations for how to foster deeper collaboration.


Machine learning is just one subarea of the broader Artificial Intelligence toolkit, but it is the one that has caught much of the public imagination of late. This symposium brought together a multifaceted group to explore what machine learning means for user experience: what challenges do we face in creating desirable, useful, usable, and reliable user experiences that incorporate machine learning techniques? We invited participants to consider issues that lie at the intersection of machine learning and user experience. Questions included what application- and domain-specific challenges exist for experts who work with machine learning and predictive modeling, and we solicited observations from those who study the effects of such systems and services on people and their practices, and, ultimately, on social structures.

The symposium participants were as diverse as the topics we covered, hailing from industry and academia, from the social and technical sciences, and from design. Attendees came from Artificial Intelligence's many subfields; from engineering and computer science; from HCI and interaction, UX, and product design; and from sociology and anthropology. They were at different career stages and held a range of roles and levels in their organizations.

The first day centered on paper presentations and discussion; the second day on participatory activities: a workshop exploring assumptions, fears, and hopes for machine learning; a panel on autonomous vehicles with industry experts from Nissan, Ford, and Renault; and a demo session that allowed participants to gain hands-on experience with some of the systems discussed in the presentations. The third, half-day session was devoted to discussion, reflection, and planning of next steps.

The papers presented addressed the impact of machine learning on a range of topics, from the philosophical and critical to the practical:


Communication and Collaboration

We addressed how to develop tools that support communication and collaboration among system, interaction, product, and service designers. How might we support a more productive dialog between those who apply machine learning techniques and those who understand the implications of the choices that developers and designers make in the design of these systems? We looked at getting beyond the black box: enabling better experiments in model training, and in tending and pruning data.

Reciprocal knowledge sharing will move both areas forward and will enable us to create more trusted and trustworthy user experiences. Bringing in relevant and inclusive case studies that reflect a diverse range of use cases is one way to better formulate design opportunities.

Automation, Agency and Control

Spurred in part by the panel on autonomous vehicles, we discussed the difficulty of designing for complex ecosystems that are multi-device, multi-service, and interconnected (or sometimes disconnected). These systems all utilize their own forms of learning and predictive modeling, making for considerable design and user experience complexity, and they need to work across technical, physical, and social layers.

Bias, Trust, and Power

We also tackled more philosophical and political issues. On the second day of the symposium, we discussed system transparency, with a call for clear provenance models that make explicit the potential biases in machine learning data sets, sources, and interactions. The basic call was to always provide multiple points of view to mitigate issues of bias, and to make bias itself an explicit topic of investigation.

Trust and power were key issues closing the symposium: What dialog should systems have with their users, and what does it mean for systems to be “personable”, to have “character”, and to be “socially responsible”?

The symposium ended with a pledge to craft a summary monograph, to be published as a complement to the publications in the AAAI 2017 Spring Symposium Technical Report.

Mike Kuniavsky, Elizabeth F. Churchill, and Molly Wright Steenson served as co-chairs for this symposium. The papers that were submitted to the symposium were published as AAAI Press Technical Report SS-17-01.

Report authors

Elizabeth F. Churchill is a Director of User Experience at Google, based in San Francisco and Mountain View, USA. Email:

Molly Wright Steenson is an Associate Professor at Carnegie Mellon University in the School of Design in Pittsburgh, PA USA. Email:


Original Abstract, Topics and Schedule

Original abstract

Consumer-facing predictive systems paint a seductive picture: espresso machines that start brewing just as you think it’s a good time for coffee; office lights that dim when it’s sunny and office workers don’t need them; just-in-time diaper delivery. The value proposition is a better user experience, but how will that experience actually be delivered when the systems involved regularly behave in unpredictable, often inscrutable, ways? Past machine learning systems in predictive maintenance and finance were designed by and for specialists, while recommender systems made suggestions but rarely acted autonomously. Semi-autonomous, machine learning-driven predictive systems are now appearing in consumer-facing domains from smart homes to self-driving vehicles. Such systems aim to do everything from keeping plants healthy and homes safe to “nudging” people to change their behavior. However, despite all the promise of a better user experience, there has been little formal discussion of how the design of such learning, adaptive, predictive systems will actually deliver it.

This symposium aims to bridge the worlds of user experience design, service design, HCI, HRI, and AI to discuss common challenges, identify key constituencies, and compare approaches to designing such systems.


Symposium schedule

Monday, March 27

Tuesday, March 28

Wednesday, March 29


Mike Kuniavsky
Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto, CA 94304
+1 650 812 4847