Design and AI: Where do we start?

How AI is shaping human experiences, and what the work of designers looks like in an AI-first future.

Satsuko VanAntwerp
Rat's Nest

--

Toronto is poised to become a hot spot for the fast-growing field of AI. While most of the current practitioner-level conversation is engineering-centric, we’re interested in the intersection of AI and design. How is AI shaping human experiences? What does the work of designers look like in an AI-first future?

To tackle these and other questions, we created a new meetup: design+AI. Design+AI brings together an early community of practitioners and thought leaders to share our current state of knowledge, with the goals of learning together, faster; preempting disciplinary isolation; and creating meaningful communication across disciplines.

The inaugural Design+AI meetup was held on April 28. Here is a distilled version of the conversation:

What is AI? AI is a loaded term. Machine learning and deep learning are more precise. We continue to develop machines that can do more, and we keep moving the bar on our definition of AI, making it difficult to pin down. And anyway, intelligence is a construct. Another way to think of AI is as an output of machine learning. The strength of AI, or the level of its “intelligence,” sits on a spectrum.

AI challenges our sense of control and self-importance. We assume that AI is smarter than us, that it’s superhuman, that given time it will become independent from its maker. But AI is an artificial alien. It is a machine that is purpose-built. This assumption of hostile takeover has more to do with our fear of change and our hubris about our place on the evolutionary ladder than with any true threat.

AI forces us to self-reflect, and that’s both scary and difficult. Humans are biased, and yet humans are behind AI training data. Because AI is a reflection of its training, AI can shine a light on the uncomfortable truth about the biases in our society (as we saw in the New Zealand and parole examples). People often say “I have nothing to hide” and forgo their privacy. But AI looks for correlation, not causation; and gaps and biases in training data can result in unfair discrimination. For example, your Facebook friends could hamper your ability to get a loan. As designers, it is our responsibility to prototype and test for these gaps, so that we don’t perpetuate or amplify systemic inequalities in society.

It will take time to acclimatize to AI. We’ve gotten used to all kinds of game-changing technologies: airplanes, cars, walk lights, skyscrapers, iPhones, to name a few. Now, we use these things without thinking twice. We can test AI so that it’s reasonably safe. For example, self-driving cars are likely better drivers than any of us, given lidar sensors, 360° vision, and night vision. Over time, we will get used to AI, too.

Transparency is a nice idea, but it is not always practical and is sometimes irrelevant. We ought to have access to the rationale behind life-altering decisions; that’s the thinking behind the EU’s recent regulations on algorithmic decision-making. In reality, however, these laws are shallow. Upon request, AI owners are only required to release a decision flow chart rather than a deep analysis of how a decision was made. That’s because, behind the curtain, there are thousands of variables (even millions with deep neural nets), each assigned a weight, that go into an AI output. These systems are “black boxes”: even if you look at all the math, there is too much data to wade through to understand what is going on inside them. And in most cases we don’t have access to what’s behind the curtain anyway. AI is often owned by companies that are not required to share their algorithms and are not eager to give up IP. Furthermore, in the case of racial profiling, understanding how AI (or a human) arrived at a racist output is irrelevant — regardless of why, this kind of prejudice is unacceptable. Because we can’t understand all the layers of AI, we need early prototyping and testing more than ever before.

First came automation, now augmentation, then…? We’re far from Terminator or sentient AI. In the meantime, AI can augment our abilities to do our work better. For example, judges could leverage AI to search vast bodies of civil code and parallel judgements to support them in reaching a more objective verdict. Even trainers and testers of AI reap benefits: AlphaGo’s trainer realized that his own playing ability improved during the process of training and testing the AI. But what do augmented experiences look and feel like to the person on the receiving end? How do we design ethical, meaningful, useful, thoughtful interactions, especially as we inch from augmentation to agency?

AI makes ethics murkier. The Trolley Problem suddenly has real world implications thanks to self-driving cars. And, AI can predict which patients will develop mental health conditions, like schizophrenia, down the road. But is this helpful or harmful information? Designers, in collaboration with multidisciplinary teams, will need to unpack these and other messy questions. In doing so, designers must step back and continue to drill down on intent: What kind of world do we want to live in and how can machine learning and deep learning help us get there?

How do you see AI impacting and changing the practice of design?

Photo: practitioners chatting before the start of the event.

This blog post is a summary of a rich group discussion at the design+AI meetup, attended by a small group of professionals and practitioners: Alyssa Kuhnert, Andrea Ong, Andy MacDonald, Angus MacPherson, Benjamin Visser, Charles Finley, Cheryl Li, Craig Saila, Eric Lee, Genco Cebecioglu, Jay Vidyarthi, Jon Tirmandi, Kei Turner, Kimberley Peter, Lindsay Ellerby, Mattew Kantor, Matthew Milan, Nora Young, Scott Wright, Spencer Alderson, Vance Lockton, Vince Wong, Ragavan Thurairatnam, and myself.

This was the first session of a new meetup for Toronto-based design and technology leaders, convened to chat broadly about designing in a world of AI. While the event is currently invite-only, we will be opening it up to a larger group in the coming months. In the meantime, please do share any comments or questions in the comments section, or get in touch via hello@normative.com.

--

Satsuko VanAntwerp
Rat's Nest

User Researcher & Strategist • building human-centred AI / half Japanese half Dutch / MBA / into: explainable ai, tech ethics, behaviour Δ. hybridity.xyz