Visualizing Artificial Intelligence

One of many iterations of the training graph, which describes how an AI model is learning and provides information on key performance metrics

Project Background

Bonsai is an artificial intelligence platform company that creates tools for software developers, data scientists, and engineers to build their own AI models. Most users are new to AI and are not familiar with the terminology or complex mathematics that underlie it. When they build models on Bonsai's platform, they need insight into how their AI is learning. What's the best way to convey this information, especially for users who are working with AI for the first time?

Role
UX design and research lead

Team
Visual designer, product manager, and two software engineers

Timeline
One month for initial research, design, testing, and iteration, with multiple subsequent passes

Initial Research

My team needed guidance on what to visualize (and whether visualizations were even appropriate), so I invited AI researchers to help us get started. I spoke with researchers on Bonsai's AI team and with external experts to understand how they trained models and what sorts of visualizations they used. Together, we discussed the competitive landscape, and I also led a series of sketching sessions and design studios with real data to develop prototypes for data visualizations we could use in our product.

AI researchers sketched concepts and created prototypes with real data in Excel and Python as part of a data visualization design studio

Working with real data was key: we needed to know what was possible with the kinds of data users would actually be working with, and it helped us create realistic prototypes we could put in front of users.
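To give a sense of the kind of prototype that came out of these sessions, here is a rough sketch in Python of a training-progress chart: a noisy per-iteration performance metric with a smoothed trend line layered on top. The data, metric name, and smoothing window below are illustrative placeholders, not Bonsai's actual data or final design.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical stand-in for a training run: a noisy performance signal
# that trends upward as the model learns. The real prototypes used data
# from actual training runs, which aren't reproduced here.
rng = np.random.default_rng(seed=7)
iterations = np.arange(2000)
performance = 1 - np.exp(-iterations / 600) + rng.normal(0, 0.08, iterations.size)

# A rolling mean makes overall progress readable even when
# individual iterations are noisy.
window = 100
smoothed = np.convolve(performance, np.ones(window) / window, mode="valid")

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(iterations, performance, color="lightgray", label="Per-iteration performance")
ax.plot(iterations[window - 1:], smoothed, color="tab:green", label="Smoothed trend")
ax.set_xlabel("Training iteration")
ax.set_ylabel("Performance")
ax.set_title("Is the model still improving, or has it plateaued?")
ax.legend()
plt.show()
```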

Low-Fidelity Designs & Concept Validation

Now we had prototypes that suited AI experts, but would they help users new to AI? With guidance from the AI team, we wrote explanatory copy and added supporting information to help novice users understand what they were looking at.

I put these low-fidelity designs in front of software engineers without much exposure to AI to see how they would interpret them. The focus was on perceived value and usefulness rather than usability: do these designs make sense? Are they useful? Can users tell why they are looking at a given visualization and what they would do with the information? Assessing usefulness early would tell us whether we were headed in the right direction.

Generally speaking, users could tell us at a high level what a visualization was supposed to convey, but they weren't sure what to do with that information.

We iterated and tested a couple more times with low- and mid-fidelity designs before arriving at an approach that performed better.