Google Cloud NEXT '17 - News and Updates

A look into usability design at Google (Google Cloud Next ’17)

(Video Transcript)
[MUSIC PLAYING] ALLEY RUTZEL: Hi everyone. Thank you so much for coming. I hope you're having a great time at Next. My name is Alley Rutzel and I'm a senior manager of user experience at Google. I work on a new enterprise video meeting experience in Hangouts, which you're going to be hearing a little bit more about tomorrow in the keynote. So I'm here today to give you a behind-the-scenes look into Google's usability design philosophy, and how you can apply what you've learned here at your organization. I will also be happy to answer questions at the end of the session; there will be plenty of time. I timed myself and this only took 30 minutes, so I really do want to hear from you. If you have questions, please feel free to ask them at the end. So first, what exactly do I mean when I say usability? A leading expert in the field defines usability as a quality attribute that assesses how easy user interfaces are to use. That's pretty straightforward, but it's also an oversimplification.

The word usability can also refer to the methods we employ for improving ease of use during the design process, and I'll talk a little bit more about that in a minute. But for now, let's talk about usability and break it down into what I consider its six quality components. So learnability: how easy is it for users to accomplish basic tasks the first time they encounter a design? The more learnable a system is, the less time a user takes to understand how to do a specific task without having been previously trained or provided documentation. But how important is learnability in a design? Think about designing a user interface for a kiosk in a mall. You want people to be able to go up to it and accomplish their tasks really quickly. They shouldn't have to learn how to use the interface. You want them to achieve their goals, and you want the interface to get out of the way. So learnability is incredibly important in that scenario. But suppose you're working on a really complex enterprise business system, such as customer service management tools.

Then chances are users are going to invest more time in learning how to use the system, and maybe there's training involved. So learnability in that case is still important, but other components of usability may be a higher priority. Efficiency is another component of usability. Once users have learned the design, how quickly can they perform tasks? Efficiency is typically measured by the number of clicks or keystrokes required, or the total time on task. But once again, it's important to understand your users and how they like to work. For example, are they likely to use the interface infrequently? Or are they going to be habitual users who are in it every day, who might learn hidden shortcuts and controls? Keyboard shortcuts can be extremely efficient for proficient users. But for those who use the interface less often, if shortcuts are the primary interaction tool, they're going to be really slowed down, because they won't know what those shortcuts are and will have to learn them.

Memorability. When users return to the design after a period of not using it, how easily can they re-establish proficiency? Think about how often users will be coming back to your product or service. Obviously we all want them to come back every single day, but chances are they won't. Well, some people will, but most often they won't. For example, I have Diamond status on Delta, but I don't go to the Delta site every single day. When I do, it's important that I know where to find things. I want them to be in the same place they were before. Obviously, a complicated design is going to make it harder for users to remember the exact steps they took to accomplish a task, which is why efficiency and memorability go hand in hand. If the design is more efficient, there are fewer steps to go through to accomplish a task and fewer things to remember. This makes it easier to re-establish proficiency, even after a long period of not having used the product.

So, accessibility. I think this one is really important. Can all users, regardless of ability, navigate, understand, and achieve their goals equally, without barriers? In the words of Nicholas Zakas, a front-end engineer formerly at Yahoo, accessibility is not a feature. And I think that's really important to take into consideration. This means that accessibility is not an optional item on a list of product functions, but instead a mandatory requirement on par with performance. As such, every product should be built with accessibility in mind from the very beginning. While accessibility focuses on people with disabilities, many accessibility requirements actually improve usability for everybody. Accessibility also benefits people who don't have disabilities but are in limiting situations. Imagine the time I broke my hand slamming it in a car door: I wasn't able to use my device the way that I normally would, and I had to turn on accessibility features to be able to navigate.

Or even being outside in the sunlight with a phone or a laptop, where you're limited by the contrast on the screen. So there are a lot of things we call situational disabilities that we need to be thinking about when we're designing our tools. Errors. How many errors do users make, how severe are those errors, and how easily can they recover from them? Errors are inevitable. They happen. Unintended actions happen. Think about how many times you're filling in a form: maybe you mistype your email address or put in the wrong password. A lot of times, when I'm filling out my credit card expiration date, I put the wrong month in. Then I get a big error, and I have to go back and redo it. But there's a difference between these types of slips and user interface problems. If users continue to click on a heading that's not actually clickable, or they look for something in the wrong part of your navigation system, then there's probably something about the design that we can improve.
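Those repeated clicks on non-clickable elements can be surfaced from event logs. A minimal sketch, assuming a hypothetical log format where each click event records the element and whether it is actually interactive (the names and data here are illustrative, not from the talk):

```python
from collections import Counter

# Hypothetical click-event log: each event records which element was
# clicked and whether that element is actually interactive.
events = [
    {"element": "nav-link", "interactive": True},
    {"element": "section-header", "interactive": False},
    {"element": "section-header", "interactive": False},
    {"element": "nav-link", "interactive": True},
    {"element": "section-header", "interactive": False},
]

def dead_click_report(events, min_clicks=2):
    """Return non-interactive elements that users keep clicking on.

    A high count suggests the element looks clickable, so the design
    (not the user) is the likely source of the error.
    """
    counts = Counter(e["element"] for e in events if not e["interactive"])
    return {elem: n for elem, n in counts.items() if n >= min_clicks}

print(dead_click_report(events))  # {'section-header': 3}
```

Running a report like this periodically is one way to catch the "heading that looks clickable" class of design problem before it shows up in a usability study.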

One action to take is to make sure all of the elements in your UI are instrumented for performance metrics, so that you can understand what people are doing when they use your product. If you don't know what people are doing and what errors they're making, you're not going to be able to fix them. A lot of times we only look at performance metrics for things that are pretty standard. But think about the elements of your UI that you might not know people are clicking on, but they are: the non-clickable things, like that header I talked about. You might not instrument it, because it's not meant to be clicked. But if people think it is, you want to know that, too. And finally, satisfaction, which I consider probably the hardest to measure. Satisfaction is about how pleasant your design is to use. How often have you been on a site, trying to accomplish a task, when all of a sudden a satisfaction survey pops up, jumps in your face, and interrupts your flow?

First of all, that's not very satisfying. But think about it: businesses use those surveys because they want to know what you think. So we have to find ways of asking our users what they think without being so annoying. You can do this in usability studies. A lot of people use usability studies as a way to gather that qualitative information, and that's great. I love usability studies; we do them all the time. But sometimes we notice that participants tend to rate an interface highly on post-test questionnaires even when they failed to complete the task. And they do that for a couple of reasons. One, they kind of want to impress you: hey, I'm here, I can really do this, it's great. They don't want to say anything negative to you. Sometimes you're paying them to participate in the study, so they feel like they have to please you. But the worst one, the one that makes me cringe, is that they often blame themselves.

It wasn't the UI's problem; it was my fault that I failed. And they don't want to show that it was their fault, so they say everything was great, it was wonderful. It's not you, it's me. Another tool we use is surveys. But unless they are carefully worded, surveys can also result in what we call acquiescence bias: the fact that people are more likely to agree with a statement than disagree with it. So those satisfaction surveys, the ForeSee surveys and the like, you'll notice are carefully crafted so that they're not biased or leading. It's hard, but it's not impossible. Good user researchers know the proper methodologies to use for measuring satisfaction, and I'll talk a little bit about how we do that in a minute. Now that we've covered how we think about usability, the components of which are learnability, efficiency, memorability, accessibility, errors, and satisfaction, let me share a little bit more about our usability philosophy at Google.

It's pretty simple: focus on the user, and all else will follow. So how do we do that? How do we focus on the user? We do it by involving the user in every phase of product development. When we think about the phases of product development, we typically think of a temporal progression from idea to launch. Some of you may have seen this framework before; it's often referred to as the double diamond. During the discover and planning phases, we're trying to figure out what the right thing is to build. This phase is divergent and exploratory. It's a search for new questions, before we start to synthesize knowledge into an insight. That's why we go wide before we start to narrow. In the remaining phases of conceptual design, detailed design, and build, we're focused on designing the thing right. We take the insights we learned in the first phase, explore several solutions, and then narrow down and craft the working solution that we plan to bring to market.

This linear path helps us feel like we're making progress. But as you probably already know if you've ever done product development, this is not how things typically go. This is what the process looks like for real. Yeah, this is more like it. The reality is that the process is really messy. You have an idea, you try it out, you argue about it, you mock it up. Maybe you test it out on a few users. You realize that they don't like it, or there's something wrong with it, so you try something else. Then maybe you realize that there are technical difficulties, and the back end can't support what you're doing, which makes it unfeasible. So you have to loop back somewhere near the beginning and start again. Somewhere in there, there's coffee and beer. But your goals are still the same. You want to design the right thing, and design the thing right. In other words, you want it to address user needs, and you want it to be usable. So let's dive into these phases and see how you can keep the user at the center of this messy process, how you can focus on the user.

So during the discovery phase, you want to uncover opportunities for design improvement if you have an existing product. If you have a brand new product, you also want to start looking at the market and understanding the needs out there that aren't being fulfilled. This phase should be a really close partnership between UX, engineering, and product management. But you should also involve operations, support, marketing, sales, and any other specialists in the organization who talk to your users, so that you can gather as much data as possible about them. The goal here is to understand user needs, understand who your users are, and how they currently use your product. Quantitative research like metrics and log analysis really helps you understand what users are doing when they're using your product. But then there's qualitative research, such as ethnographic studies: going out into the field and actually observing users in their natural environment.

It sounds like Dian Fossey, and in a way it is. It's about going out there and meeting users where they work, seeing the environment they work in. That helps you uncover the why behind the what. And sometimes the why can surprise you. So I'm going to take a little aside and tell you a funny story that my husband told me long ago. My husband's a fiber optic engineer. There was a time when a customer complained about temporary outages that were happening around the same time each day. He would look at the data and see that, yes, there was a brief outage, and it seemed to happen in the afternoon. They ran tests and diagnostics and couldn't really figure out why this was happening. He sent technicians out to the field to look for the usual suspects. Apparently squirrels like to gnaw on the wires, and that can cause outages. So they looked for little frays and everything like that, and they couldn't find anything. He was becoming really frustrated, because he couldn't figure out what was wrong.

So what he decided to do is send a technician over to the customer's site one day, while he went out in the truck with another technician, and they just drove around the area where the outage was happening; it's hard to pinpoint exactly where it is. And something happened. He noticed a little boy, about 10 years old, walking home from school with a stick in his hand. As the boy walked down the street, he approached what's called a down guy. When you think about a telephone pole and the way it anchors to the ground, that angled wire is called a down guy. The boy would take that stick as he approached and give the wire a good thwack. What was happening was that the vibration of the wire was interrupting the light path of the optical cable. While it was vibrating, the outage happened; then it would stabilize and the outage would stop. Sure enough, moments after the kid thwacked that line, my husband got a call.

The customer had an outage. So he went to the kid and asked, do you do this every day? The kid said, yeah, I like the sound it makes when I hit it. Because it does; it makes this weird hum. And he thought, oh my god, we figured it out. So my point is that there are some problems you'll never really understand until you actually go out in the field and observe what people are doing. Let me give you an example of what I've experienced on my own product in this area. In 2013, Hangouts was a part of G+. Through log analysis, we noticed that it was becoming increasingly popular among business users. So we reached out to them to find out more. We wanted to know why. We conducted diary studies and field research with local businesses, and based on what they told us and what we observed, we were able to uncover some interesting insights that led to opportunities to improve the experience for them. One thing we saw was that there were basically four types of meetings where they used Hangouts.

The first is ad hoc. These are your typical pickup meetings: hey, I have a really quick question, do you want to jump on a Hangout and answer it? Let's go into this quick room, let me show you something. Round tables are what we think of as the more traditional, typical meetings. They are usually scheduled, sometimes recurring. Think about your team meetings, where you go to review something or solve a problem. Structured engagements are more like sales presentations or trainings, where there's usually one speaker presenting to a group. And then spotlights, which are generally larger meetings. They're more like all-hands and earnings reports, where you have one-to-many. They're often broadcast to a large audience that might be remote or watching online. Our customers are using Hangouts on Air for things like that. So once we got insight into why and how Hangouts customers were using the product, we were able to conceive of a new, enterprise-focused take on video meetings that could better meet their needs.

So the second phase of the product development process is planning. This is about clarifying and outlining the nuances of the problem. The goals here are about taking what you learned and deciding on the right thing to build. Some methods that can help you decide are prioritizing the user problems, or creating use cases and scenarios to help you understand: when somebody does something, what are the scenarios they're doing it in, and what are they trying to accomplish? One thing we also do on my team is what we call a strategy brief, or a one-pager. This comes before the PRD, when you're first sitting down and thinking about the problem. It doesn't get into detail, but instead it aligns the team on the high-level goals and helps you identify and narrow down who your user is and who it is you're trying to reach. It also defines what success looks like. Without that North Star, your UX team, your PM team, and your engineers might all have different goals in mind, and you might not be aligned.

So we've found the one-pager very useful: simply sit down, talk about these things at a high level, and make sure you're on the same page. The last method here is experience mapping, and I'm going to talk about that now. In my last example, I mentioned that we decided to focus on solving the user needs of the common round-table meeting type first, because that was something we felt most of our users were doing. But we didn't understand what exactly they needed. So to figure that out, we decided to create an experience map of a typical meeting. What is an experience map? This is pretty simplified, but an experience map is a strategic process of capturing and communicating interactions from a user's perspective. In this example, I'm showing a really simplified version of the experience map that we created when developing our new product. We structured the map around the observation that, when you think about meetings, there are different stages.

There's before the meeting, during the meeting, and after the meeting. And on the y-axis we have: what are users doing? What are the tasks they're actually performing during those phases? What are they thinking while they're doing those things? How are they feeling during those actions? And then, what do we see as opportunities for solving some of those user needs? For example, in the before phase we found that business users often needed to take meetings when they were on the road. As you know, internet connections can be pretty spotty. So under Opportunities we decided to put: provide dial-in access to meetings, so that they could use their phone rather than have to rely on video. You can also add a section for gaps; that's pretty common. You can look at what the gaps are in your product versus the market. But this is a really helpful process for laying out the landscape of what your user is doing in and around your product.

Since meetings are spatial as well as temporal, we also had to consider a variety of physical factors when creating our map. For example, before a meeting begins, you probably need to find the room. So what does the wayfinding system look like? How confused are you going to be about getting there? How do you know how long it's going to take you to get there? And then when you're in the room, you need to use a remote to actually start the meeting and maybe control some things, like zooming and muting. So there are a lot of physical elements we had to think about as well. A lot of times the remote goes missing in those rooms; what do you do then? The activity of mapping an experience like this really helps organizations identify strategic opportunities and customer pain points, and helps generate innovative products. It also helps you decide where to focus your energy before moving to the conceptual design phase. So conceptual design is where you create the most apt solution for the problem.

You're entering this phase when you understand what the problem is, but you don't know yet how to solve it. From talking to our users and creating the experience map, we discovered that joining a video meeting can be a pretty painful process, but we didn't really know exactly how to solve that. In this phase, our goal is creativity. We really want to generate a lot of ideas; no ideas are stupid in this phase. And then we start testing and iterating on them. We want to design the thing right. A great way to involve the user in this phase is to do rapid iterative prototyping and quick usability testing. It doesn't have to be with end users. It can be with people on your team, with the sales department, with anybody nearby. You really just want to get ideas out there and validate as quickly as possible which one you want to move forward with. My advice here is to stay low fidelity. And I'm talking sketches.

Wireframes. Rough user flows. This is about the idea, not the execution. The lower the fidelity, the better, actually. Even when testing with users, paper prototypes are really helpful. We've learned that as the fidelity of the wireframes and prototypes goes up, the quality of the feedback can actually go down, because people are hesitant to give you negative feedback when they think you've put a lot of effort in. So sometimes, even if we work in high fidelity, we will dumb it down. We'll make it a little less polished, so that when we put it in front of people, they feel more comfortable providing feedback on the product. So here's an example of what we showed users early on. As I mentioned, for video meetings it's critical that attendees can join quickly and easily. So we shared our ideas for several different ways they could set up a meeting. The options we gave them were: hey, you can have it be totally open, so anybody with a link to the meeting can join.

If someone is not invited or is outside the company's domain, you can require them to knock to get in. Or you can choose to completely lock it down, so it's only open to people who are on the calendar invitation, no exceptions. So we tested these concepts, and people just couldn't grasp all the different options and why they had to make a choice. How do I know which one is right? What if I'm wrong? Why shouldn't I be able to set it up one way and change it the next time? We wrangled with several text changes to try to make everything clearer. But ultimately, we decided to just come up with a simple default: if you're not explicitly invited to the meeting, you can knock to get in. We found that users expect us to do the right thing. And this was a great moment when we realized usability research can influence product direction. The detailed design phase is where you spec the complete end-to-end solution. So once we knew conceptually where we wanted to go, we entered this phase to nail down the details.

This is about getting the details right, so your final product is delightful and easy to use. It's about holding a high bar for quality, and about making sure the experience is consistent, so users don't have to relearn something they already knew how to do or should already be familiar with. An example I can pull from Google: think about how you share something in Docs, Sheets, or Slides. You know where that control is, and it always behaves the same way. So why reinvent the wheel, especially if it's part of a suite of products? This is where the designers on your team are doing the visual polish and the redlining, or what we call speccing out, down to the pixel, of how the UI should look and behave for your engineers to build. This is also the phase where you really want to make sure your accessibility spec is up to date, to determine how screen readers are going to interpret your pages. For our new video meeting product, during this phase we also did an extensive audit of what we call critical user journeys.

And that's the last method on here, so I'll talk about those right now. Users don't think in terms of single features; they think in terms of accomplishing tasks. Tasks often involve multiple features, products, or product areas. Thus users journey through the hardware and the software. Simply put, a user journey is a task that a user wants to perform, and a critical user journey is a really important journey that you have to get right. At Google, we identify two types of critical user journeys. Toothbrush journeys are what we call the common tasks that people perform often. Pivotal journeys are less frequent but also important tasks, such as when you're asking someone to sign in or use the product for the first time. They're not likely to go through that again, but you've got to get it right the first time. In my example, our journey was: as a first-time user, I want to join my next meeting right now. This journey forced us to take a step back, think about what it's like to experience our product for the first time, and acknowledge that the options for joining could be a little overwhelming and confusing.

Critical user journeys frequently cross product and feature boundaries. At a minimum, you should be aware of how what you're working on fits into the bigger picture. With video meetings, for example, we had to think about how this integrated into calendaring and chat systems, so that people could quickly move from chat to video, or from a calendar into a video meeting. We wanted to make sure the experience was going to be excellent for our users, so we spent a full week going through our critical user journeys and doing an audit to document the potential pitfalls. We scrutinized 30 journeys just like this one, and we looked at them across all devices: the iOS and Android apps, the TV experience when you're in a conference room, and the web. We also looked at them through a number of different lenses. We had people looking at the visual design, the interaction design, the motion, the audio cues, the accessibility, and the writing.

We even had our writer take a look. Every single person was doing a pass to make sure that aspect of the experience was the best it could be. We filed bugs even if a visual design was a pixel or two off. We were pretty strict. You can really do critical user journeys at any time during the design process. Ideally, you want to start thinking about what those critical user journeys are early, maybe when you're doing your user stories and scenarios. In our case, we had defined them in the planning phase, but we did the audit in the detailed design phase, before we were ready to launch. And then finally, the build phase. This is where we implement, test, and evaluate the solution. The goals of the build phase are about pride in craftsmanship, attention to detail, testing and launching, and assessing the implementation once it's launched. As for methods, you want to make sure that you've instrumented the site, using tools like Google Analytics or whatever your preferred one is, for performance and usability metrics.
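Once the site is instrumented, those metrics let you compare designs quantitatively, for example in an A/B experiment. As a minimal sketch, here's a standard two-proportion z-test on made-up task-completion counts (the numbers are illustrative, not from the talk):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Z statistic for the difference between two task-completion rates.

    Uses the pooled proportion for the standard error, as in the
    classic two-proportion z-test.
    """
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (success_b / n_b - success_a / n_a) / se

# Made-up numbers: with variant A, 180 of 400 users completed the task;
# with variant B, 220 of 400 users did.
z = two_proportion_z(180, 400, 220, 400)
print(round(z, 2))  # 2.83 -- |z| > 1.96 suggests a real difference at the 5% level
```

In practice you would let an experiment framework handle assignment and significance, but the arithmetic underneath is this simple.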

Remember when we talked about the components of usability in the beginning: you want to be confident that you're capturing data like number of clicks, errors, and more. You also want to run experiments, such as A/B tests, to see which design is performing better for users. But most importantly, you want to engage in what we call data-informed design, not data-driven design. When you are data driven, you're only following the quantitative results of your research. The famous example at Google is the 41 shades of blue, when we tested, literally, 41 shades of blue links on the search page to see which one performed better. You can look it up; I'm not kidding. And that's great. We saw actual, different results when we looked at all the different shades of blue. But that's a bit soulless. It's not the way we want to think about using data in our design process. When you're data driven, you're only looking at the hard data, but that data can sometimes be misleading.

It will tell you the what, but not the why. And understanding the why is where you are able to take action. Instead, use the data and combine it with qualitative feedback to assess the complete picture. This is data-informed design. Quantitative metrics are also often not useful for evaluating the impact of the UX changes you make. They're pretty general, and they don't usually directly relate to either the quality of the user experience or the goals of your product, so it's hard to make them actionable. To help with this, a few smart researchers at Google came up with a method we call HEART. The HEART framework is useful for evaluating the quality of the user experience and the impact of UX changes over time. Happiness measures user attitudes, often collected via those surveys we talked about: for example, satisfaction surveys, perceived ease of use, or Net Promoter Score. Engagement measures the level of user involvement.

It's typically measured via behavioral proxies, such as frequency, intensity, or depth of interaction over some time period. Examples might include the number of visits per user per week, or the number of photos uploaded per user per day. For Hangouts, we want to know how often people are creating video meetings and maybe how long those meetings are. Adoption measures the number of new users of a product or feature. Here you want to look at the number of new users over a period of time, like accounts created in the last seven days, or the percentage of new users who create video meetings. Retention is the rate at which existing users are returning to your product: for example, how many of the active users from a given time period are still present some time later? You might be more interested in failure to retain, which is commonly known as churn. And then Task Success includes the traditional behavioral metrics.

These include efficiency (the time to complete a task), effectiveness (the percent of tasks completed), and the error rate that we discussed earlier. This category is most applicable to areas of your product that are very task-focused, such as a search or an upload flow. We're often asked why you would measure adoption and retention when you can just count unique users. It's definitely important to count how many users you had in a given time period, for example, seven-day actives. But if you measure adoption and retention as well, you're explicitly distinguishing new users from returning users, so you can tell how quickly your user base is growing or stabilizing. This is especially useful for new products and features, or those being redesigned. Now, you don't necessarily need to create metrics in all these categories. You should choose the ones that are most important for your particular project. However, no matter what type of metric you choose, there's one important principle that you always want to stick to.
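(Before getting to that principle: the distinction between adoption, retention, and churn is just set arithmetic over activity logs. A minimal sketch, with a hypothetical log format and made-up users:)

```python
# Hypothetical activity log: user id -> set of day numbers on which
# that user was active.
activity = {
    "ana":   {1, 2, 8, 9},
    "ben":   {1, 3},
    "carla": {8, 10},
    "dev":   {2, 9, 10},
}

def week_actives(activity, start):
    """Users active at least once in the 7-day window starting at `start`."""
    window = set(range(start, start + 7))
    return {user for user, days in activity.items() if days & window}

week1 = week_actives(activity, 1)   # {'ana', 'ben', 'dev'}
week2 = week_actives(activity, 8)   # {'ana', 'carla', 'dev'}

adopted  = week2 - week1            # new this week: {'carla'}
retained = week1 & week2            # returning: {'ana', 'dev'}
churned  = week1 - week2            # lost: {'ben'}

print(len(week2), len(adopted), len(retained), len(churned))  # 3 1 2 1
```

Counting only `len(week2)` would hide that one of those three actives is brand new and one previous user was lost, which is exactly the distinction the HEART adoption and retention categories are meant to surface.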

You want to ensure that the metrics are in line with the user goals you defined for your product. So to wrap up, usability is about keeping the user at the center of your product development process. You want to consider the learnability, efficiency, memorability, accessibility, errors, and satisfaction inherent in your product. I shared some of the methods we use at Google, like experience mapping and HEART metrics, which I hope you found useful and can try in your own organizations. We've touched on some of these methods at a very high level, so I encourage you to find resources on your own on the web. Some of these are fairly well known; some are more inherent to Google's culture. But I really hope that if you learned one thing here today, it's how important it is to involve the user when you're building products, if you want people to use them, and trust them, and love them. And since you're all Google's users, you can also be a part of our design process.

So get involved. If you're a G Suite administrator or an end user of our tools, you can sign up to be a part of our G Suite user panel. If you're a developer or a cloud platform user, you can sign up for our Google Cloud Platform panel as well. I'll leave this up for a bit if you want to take a snapshot. I also want to use this time to promote a session coming up in the next room, I think room number eight, right after this. It's titled Driving Product Excellence With User Feedback, and it goes deeper into our research practices and our trusted tester and early adopter programs. If you want to learn more about that, and about the activities we do around actually going on site and talking to our customers, that's a great session to go to. Late last year we used our trusted tester program to validate key features of our products, such as supporting larger meeting sizes and making it easier to join meetings by phone or video. These weren't just UX issues.

They were essential for us to know whether we had product-market fit. We also invited several particularly engaged large customers to have a seat at the table during the design process. We visited some of those testers to watch how the product was used in action, which provided valuable insights to improve our usability. So thank you so much for coming to Next and coming to my session. [MUSIC PLAYING]



A behind-the-scenes look into Google’s usability design philosophy and how you can apply what we’ve learned to your organization.

