I think that the next new interaction experience is going to be discovering that we still haven’t gotten the old ones right. I think that the next big interaction experience is going to be…I don’t know. I’m not a prognosticator. I think where I come from is not thinking about the technology and what people are going to invent. What I think about is how people relate to it. People relate to technology exactly the same way that they’ve always related to technology, and it’s exactly the same way that people have always related to all artifacts of technology.
It’s the same way they related to steam engines, you know? People were scared of them and they anthropomorphized them. They underestimated them and then they overestimated them and then they underestimated them and then they regulated them, but it was too late and then they overregulated them for too long, and all of that is going to happen again.
Everybody knows it’s really hard to communicate with computers, and some think it’s going to get easier to communicate with computers by having natural language interpretation in the loop. People are going to discover that when you inject uncertainty into the man-machine dialogue through natural language processing, it can become harder and more obscure and more problematic. There are a half-dozen popular, state-of-the-art video conferencing programs out there, and yet they all demand lots of complicated work to get them going. Just this morning, I’m sitting here with experts in software, and they ultimately could not get the video conferencing software to work.
I just simply don’t believe that the issue is what’s the next technology. I believe the issue is that we have yet to master the technology that’s on deck today, and a lot of that has to do with the fact that we don’t know what we do in this business. We think we’re about technology, and we tend to think of technology as a rudder, but it’s more like a motor. It will push us, but in what direction? The last thing that we want is for the technology to choose the direction.
Well, I don’t think that the significant thing is the knowledge. I think that’s in the past. In the educational system that I grew up in, it was one of getting filled with knowledge, and I just don’t think that’s the issue anymore. I mean nobody really needs to be filled with knowledge because we all have all of the knowledge in the world in our pockets.
So, it’s a matter of getting critical thinking skills and understanding context, which means history. The knowledge you need to learn is not about technology, the knowledge that you need to learn is getting perspective on how technology fits into human history. It’s about learning how to evaluate that, so that when someone says to you “Hey! Here’s this really cool thing, I want you to tell me all of your favorite TV shows,” and it seems like a cool, fun social media game, but what it’s really doing is helping to triangulate your personality so that you can be targeted for marketing purposes, particularly for political advertising. People often don’t know that, and what they need is the critical thinking capabilities to understand that when somebody on social media is asking them a cool question about their behavior, it’s a lot like somebody wearing a ski mask walking up to you and asking which pocket you keep your wallet in.
When I started to think about this in the late ’80s and early ’90s, everybody knew that we had to make software user-friendly, and everybody knew that it was about being human-centered and focusing on the user, but nobody really knew what that looked like. Knowing that you need to be user-centered doesn’t really help you to know where the user’s center is, you know? I look at things as a toolmaker. I ain’t an inventor, but I think of myself as a toolmaker—that’s the thing that I do.
So the first question is: What does user-friendly or user-centered design look like? And then you begin asking yourself questions like, well, how would you draw a line from where you are now to being user-centered in the future? What would that look like, and where would it go? And it became clear to me that you don’t get there by looking at what people do, but by looking at what motivates people, what drives them. Most people don’t want to sit in front of the computer and wrangle with software. What they want is some kind of end result, so you find out what that end result is.
I realized that you really need to understand people to do that, and I was having a hard time communicating what people wanted to developers. As soon as I talked about what users wanted or what the developers thought users wanted, they kept talking about all the flexibility and capability of the software they were building. It became clear to me that when you can do anything for anyone, you don’t do anything well for anyone. You have to start out by picking one person and saying “What would it take to make this one person happy?” And then asking yourself what the difference is between that one person and this other person, and you work from there.
Whenever you’re thinking about people as a mass, you’re never getting specific because some people might want one thing and other people might want something else, and so all software ends up being like a stack of 2×4’s and a hammer and saws and it’s like “Yeah, build the house you want.” But this isn’t what most people want. So instead of starting with the tools for a “perfect” house, what if you were to start off by asking what tools make an adequate house that would make the majority of people happy? But it turns out that that surfaces a whole other problem; as soon as you start talking about a majority of people or people in groups, you immediately go back to “Well, we don’t know what they want,” and so we go back to giving them the stack of 2×4’s and the hammer and the saw. You just get into this circular reasoning, and this was very characteristic of how software was developed universally back then and how it’s commonly developed today. So we try to build software for everybody that does everything, and we ask ourselves what might anybody want and give that to them and that’s just the path to shit. Yes, I want to make everybody happy, but to make everybody happy, you need to begin by making just one person happy. And that was a fundamental insight.
So I was working for a client at the time, and I created a representative archetype, a person who I made up and gave a name to, and her name was Cynthia. Cynthia was very realistic, as she was based on field research that I had done where I went out and interviewed users. So when I described Cynthia to the development staff, they all recognized her. Even though she didn’t exist, they all said “Yeah, I get Cynthia. I’ve met 100 people like her,” and then I was able to say “This is what Cynthia wants, and she doesn’t want that other stuff.” And all of a sudden I had a knife that I could begin to cut the “somebody might want” stuff out of the equation and was able to make the case that if we could make Cynthia happy, we could make a whole bunch of other people happy. And the developers looked at that and said “Yes! We understand that.”
This was an antidote to “our job is to make everybody happy.” If you try to make everybody happy, you don’t make anybody happy. But, if you try to make Cynthia happy, you can make a whole big group of people happy. And then you can look at the people that aren’t in that group of happy Cynthia-like people and say “Well, what’s the difference between them and Cynthia?” And that’s a tactical question that you can find an answer to, and then you can figure out how to make the subset group of people happy using our product. This is how personas came into being. It was a matter of creating a tool for focusing your thinking away from the amorphous, always-moving, ever-changing group of “everybody” so that you’re not trying to create a solution for everybody, which doesn’t work—we’ve proven that over and over and over again.
So you say “personas and other great stuff”—thank you for that compliment, I appreciate that. The way you come up with “other great stuff” is by asking “What are we actually doing here? What are we trying to do?” A lot of people think they’re trying to make money, or they’re trying to come up with cool technology. The money and the cool technology are byproducts of creating solutions that empower real people, and so the video-conferencing software that I was using today comes from a company that has a lot of money and probably hopes to make a lot of money, and they use really cool technology. And their software is garbage, and I hate it, and I hate the company. And I was using very similar software from a very similar company yesterday, and it gave me a very similar class of problems. Different, but similar — and I hate it, and I hate the company that tries to sell it to me. What none of these companies are asking is who their users are or what they’re trying to do. Instead, they’re saying “Who are we? We’re the world’s greatest video-conferencing technology around, and we’ve got cooler images and better graphics and better technology and our stuff is desired by a broader range of people,” all of which is either irrelevant or a lie.
The way you come up with neat stuff is not by trying to come up with neat stuff, but by trying to understand who real people are and what they’re really trying to accomplish, and then getting them there, and not thinking about the corporate organization it takes or the cool technology that it takes, because often it takes a very simple corporate organization and yesterday’s technology.
This is what’s called a strawman, and it doesn’t exist and it isn’t real. You always have time, and if anybody is telling you that you don’t have much time, they’re lying to you, and they’re cheating you out of the time it takes to do it. It takes time to design difficult things. It takes decades to launch a spaceship, so for someone to come along and say that you don’t have time is…it’s just a lie. And when people lie to you, you have to say “Why are they lying to me? What’s in it for them?” So when your boss comes to you and says “This is a mission-critical product, and you only have two weeks to come out with it because we’ve got to ship it really quickly,” you’ve got to say “Who’s making a lot of money off of bad practice?” Because somebody is. How long does it take to make a really good interface for a spaceship? I’ll let you know when I’m done. That’s how long it takes, okay?
If you don’t have spacemen among your friends, it means that you have to go and find spacemen. And you don’t necessarily need to make them your friends, but you need to find them and you need to understand who they are and how they think and what they want and why they want it. And it takes a while to learn that, but when you know that, you’ll find that it’s not that difficult to create solutions for them. How long does that take? I’ll let you know when I’m done. But why are you in a hurry to send a spaceship with a shitty interface into space? That’s stupid, and only a stupid person would ask for that.
But you’re not stupid, and you have posed this question to me as though this kind of stuff happens all the time. So what that tells me, because I know you’re not stupid, is that there are people who are doing this to you because they are getting some benefit from treating you like shit. What is the benefit, and who are these people? People who invest money in these companies always want to tell the companies how to behave. There are people in Hollywood who invest millions of dollars into a motion picture that they’ll never get back, but they get to go to parties with movie stars. There’s always a reason for why they do that — they don’t care if the movie is a success or if it’s any good, but they want to party with the stars. Now, that’s not a bad business to be in, making bad movies by taking money from people who want to party with the stars. But if you’re in that business and you find yourself wondering why you didn’t get nominated for an Academy Award, that shows naivete on your part.
So, why would somebody who wants to create a product want you to hurry? Why are they in a hurry? There’s a reason why, and you have to ask yourself what that reason is. If they tell you it’s because they’ve already started construction, well that’s the silliest thing in the world. It’s like saying we’re in a hurry so we started exiting the airplane before we landed. Why would you do that? It doesn’t get you to your destination sooner. But someone makes money or gains something off of that premise, so I reject that premise — I don’t think it exists. I think there is some other reason. There’s never a case where you lack spacemen and you lack time…those are manufactured things.
There are interfaces that I’m very much interested in working on that no one has asked me to do — it’s a category of interface that I like to call “good.” Nobody ever asks me to do that. I mean I hear the words all the time, but then I ask when the product ships and they give a ridiculous timeline, and then I know that they’re lying. We all know how long it takes to make something good, which is — we’ll know when we’re done. I can’t tell you how long it takes to create a good interface, I can’t tell you how long it takes to create a bad interface, but I can tell you it takes just the same amount of time to create a good interface as it does a bad interface. So what idiot ever hinted that creating a good interface would somehow take more resources than creating a bad interface? You don’t see any evidence of that.
Actually, I am working on some new stuff. I think that in the technology industry we’ve recently come to see that our work, even really good and well-intentioned work, can be misused by bad people. We see things like the United States election in 2016 being compromised by people who very consciously used social media as a propaganda weapon, or Volkswagen using software as a way to cheat on emissions tests. The problem is that these things are being done by organizations, by groups of people, more than they’re being done by individuals. So this is not a matter of finding one bad person, but understanding that this bad behavior is often the result of thousands of individuals, each one doing something good, but the net result is something bad.
So what I want to do is understand the problem from the point of view of those thousands of people that are doing good. How can they look at their work and say “Is this good thing that I’m doing actually a part of something larger that is bad?”
How can I identify that? How can I assess the risks of misuse? How can I describe it to people in such a way that we can protect ourselves from it? In many ways, as we build complex technological systems, we’re building weapons that are going to be used against us. So if you’re a developer and your job is to create a really efficient and effective algorithm to do something, you, of course, are asking yourself questions about efficiency and effectiveness. But you also have to ask yourself questions like “Where is the exposure to abuse? How could this be used for evil purposes, and how can we protect against it?”
So those are the questions that I’m asking. As a toolmaker, I’ve been trying to create tools that will allow me to address this problem, to formulate these questions. In my talks, I’ve been making the assertion that you need to ask yourself “Am I being a good ancestor?” The more I ponder this question, the more I realize that it’s a good fulcrum. If you ask yourself this question and the answer is “yes,” then that’s a good thing. If the answer is “no,” or “maybe not,” or “I’m not sure,” then it means you have to do something about it. It turns out that it’s a powerful tool, but it’s also kind of a blunt instrument…so how do you break that down into smaller pieces?
It turns out that you can break that down into smaller pieces, and I’ve had some success thinking about that and formulating a strategy for inquiring so you can ask yourself questions like “What assumptions am I or the people around me making as we work on this? Are we assuming that people know how to use this? Will they only use this in a certain way, or will it only be in the hands of responsible people?” Examining those assumptions gives you some insight into whether you’re exposing what you’re working on to misuse or if you’re creating misuse.
We also look at what we call externalities, a term that we borrowed from economics. An externality is something that, even though you may affect it or it may affect you, it’s not something that you choose to consider. So, when I’m done with this piece of paper I crumple it up and throw it out, and at the end of every day someone empties that trash can into a bigger trash can, and then he takes that and dumps it in the back of a dump truck, and then the truck takes it 30 miles down the peninsula and dumps it in a huge landfill. Now it’s away and it’s out of my concern. Except that that landfill is not really away or out of my concern, it’s just out of my concern until somebody has to come along and deal with it. And somebody is going to have to come along and deal with it, which brings me to the third consideration of how you know you’re being a good ancestor—what is the time frame that you’re using to look at things?
Stewart Brand says you need to look at things in a 10,000-year span, but nobody in the human race has really ever looked at anything even approaching a 10,000-year span. We tend to regard a 5-to-10-year span as some hellishly long-term thing. So I can put the trash in the landfill and not worry about it for the next 5 years, 10 years, 20 years, even the next 50 years. At a certain point somebody has to look at it, and that somebody is going to be my child, or my child’s child, and if what they have to do is deal with my trash, then I’m not being a very good ancestor.
Design is a craft. A craft is something you do with your hands and your mind, and it is something that you learn by doing. So we (Cooper) have long-established traditions of teaching craft, of skilled practitioners teaching beginning practitioners, and expert practitioners teaching skilled practitioners. Classroom study is not really a very good tool for that, although there is a lot of simulation you can do that works. Design is very much like a studio craft. It goes back to what I said earlier about the importance of critical thinking. You need to have a set of principles and a set of tools to work from. It’s not about flat interfaces or hamburger menus, but about being able to understand what the real problem or challenge is, and the way to learn that is by confronting students with real challenges and working with them to break them down into their component pieces and understand them. You learn by doing; it’s a craft.
The kind of interaction design that Cooper does, I call it Alexandrian design, is design with an externally defined end purpose. It’s not design that makes you feel good, it’s not design for aesthetics — it’s design in service of moving somebody else towards their goal. So it’s a matter of understanding who that user is, what their goals are, why they want to achieve them, and then going back to first principles and thinking about how to get a user towards that goal by satisfying that motivation. That’s not the kind of thing that can be taught from a knowledge perspective. That’s only the kind of thing that can be taught from a craft and practice perspective. You need to learn by doing it, and you get better by doing it over and over and over again. And you’re taught by people who are better at it.
You can attend Alan Cooper’s lecture in Kyiv on the 10th of February at the KRUPA UI/UX conference. For more details and tickets, follow the link.
The Russian version of the interview is available via the link.