Google Cloud NEXT '17 - News and Updates

Security First for a Mobile First Strategy (Google Cloud Next ’17)

(Video Transcript)
ADRIAN LUDWIG: My name is Adrian Ludwig. I work at Google. I'm the director for Android security. Let me be the 57th person to welcome you to Next. Thank you all for coming to what is my hometown of San Francisco. I'm excited to have you all here. We have 59 minutes and 24 seconds to chat a little bit about Android security. So I'll probably spend about 45 minutes or so going through content, and then hopefully we'll have some time for questions towards the end to be able to go through. And would welcome any questions that you might have, and make sure that we get you the information that you need to be making good decisions about what you're doing in your environment. So I'm going to talk about Android security, which is a big topic. We throw around numbers in the world of Android that sometimes boggle my mind. You know, 1.4-plus billion active users is an incredible statistic. And that number grows every day by amounts that you can't really fathom. The number of people in San Francisco doesn't begin to approach the number of new users that we add to Android as an ecosystem on a daily basis.

And so it's really an honor to be able to talk a little bit about what we've been doing at Google, but also talk a little bit about how we fit into the overall ecosystem in terms of protecting Android users. Because we're just one big but still very small part of how it is that we're approaching protections in the Android ecosystem. And so where do we really fit in? That's a good way to start thinking about Android, because it's very different from other platforms. Google is a shepherd or steward for an ecosystem. We're not the provider of the platform in a strict sense. We're not the maker of all of the devices that get used by users. We're not the maker of all of the applications that get used by users. And maybe this part's obvious, we're not all of the users. And so there's a lot of different parties that are involved in thinking about how to have security for an ecosystem of this scale. We see our role as facilitating application developers by providing them with a platform that is safe by default, that has APIs that allow them to build very high security applications, or even if they're not thinking about security, to be able to benefit from the security of the platform.

We see users as basically not wanting to think about security but having an expectation that security will be provided. And so we've designed the devices to be secure by default. And then we've built layer upon layer of protections in addition to that that are really transparent so that users don't really need to worry about security. And then on the device maker side, we work very closely with partners– large companies, small companies, individual developers, to make sure that the devices that get out into the ecosystem have security protections. And there's a variety of tools that we have and that we release to open source. We also do things like build the Compatibility Test Suite and the Compatibility Definition Document, which define the security model across the Android ecosystem so that we know there's consistency across the ecosystem. And then one of the things that we've been investing a lot in that I'll spend a few minutes towards the end talking about is making sure that devices are receiving security updates.

And optimizing security updates in an open source environment turns out to be a pretty interesting and complicated problem. The security industry and sort of the software industry spent 10 or 15 years figuring out how to deliver security updates for closed source platforms. And Android has been, I think, at the forefront of figuring out how to do that for open source as well. And so we'll talk a bit about that. The way I tend to think about the problem, because it's so big, and because it's so diverse, is to focus on three different areas. The first is on the platform itself, the code that people think of as Android. And so we'll spend some time thinking about that. Then we'll get into services, things that are delivered as applications, most of them by Google, but also by third parties that add additional security protections. And then the last is what we've done to work with other parties in the ecosystem to make sure that friction that can sometimes impede security doesn't get in the way of security being delivered, and in particular, around updates and how we're enabling that.

So my background, for people who are curious, is in creation and delivery of exploits, going back quite a ways. And so I was sort of on the more offensive side of security. And one of the things that was really interesting from the offensive side of security is if you look at security primitives, basically 35-ish years ago the US government put together something called the Orange Book. And similar types of things were put together by other governments in other places as well. But there is a fairly well-defined set of expected security primitives that would exist on a secure device. And what's interesting is it's taken a really long time for the software industry to evolve to a point where it's actually providing all those primitives. When I think about Android and its sort of phase in that evolution, I put it right next to what I think of as the other mobile operating systems. So Android, iOS, Chrome OS all uniformly provide the same primitives, which are the ones that were described almost 35 years ago in the Orange Book and these other manuals, but it just took a really long time for them to get broadly adopted.

And those are isolation of applications, making sure that there was cryptographically strong device integrity that was rooted in a hardware root of trust. Then there was something interesting that happened in those intervening 30 years, which was the realization that there were bugs in software. And so one of the areas that we've invested very, very heavily in is what we refer to as exploit mitigation and attack surface minimization, which is technology designed to make sure that, even if we've made a mistake– because let me tell you, we make a lot of mistakes. Google is very good at making mistakes. That's why we have launch and iterate as one of our philosophies. That's why updates are so important for all these platforms– but even if there is a mistake, hopefully exploit mitigation can make exploitation of that in the real world more challenging. Over the last couple of days we've been looking very closely at a number of vulnerabilities that we didn't know about before that became public.

And we've been sort of analyzing those and thinking through– how did exploit mitigations prevent these from actually being able to be used in the real world? And what are the technologies that we need to be continuing to invest in so that even if there's an issue that we don't know about, we can prevent it from being a successful exploit? And that's an area that we've invested in a lot. Management basically comes down to enterprises and applications. We want to know what the state of a device is. And so we've been investing a lot in making sure that those APIs are available as well. When I look at Android– and the other mobile operating systems for that matter– I see, basically, complete implementation of all of the best practices at a platform level. So we're actually in a really good place across the mobile operating systems. That's very well and very substantially differentiated from either server infrastructure or from other desktop environments. So we're in a pretty good place.

One of the interesting points, though, about the mobile operating systems is the velocity with which new technology can be delivered. So I want to talk about one of those new pieces of technology. Encryption is a feature that has existed on Android for, I think, about six years. It was on devices, but it was not enabled by default. Users were able to turn it on. Enterprises demanded that it be present. It was in the checklist of things that had to be on a device in order to deploy it into an enterprise. But the reality was nobody turned it on. That began to change roughly two years ago with the Android 6.0 release where the default configuration on Android devices was to enable it for devices that were capable– from a performance standpoint– of having it function without being disruptive to the user experience. And so in the span of about two years we went from basically nobody having encryption enabled on Android devices, even though it was available, to having over 80% of new Android devices that are coming out be encrypted.

And so that adoption rate I think is one of the unique characteristics from a security standpoint of mobile– is if we find some other technology like encryption that is really important to defend users, you can expect that it will be adopted very, very quickly. And this is turnover rate, not in a sort of slow– or not in a highly managed environment. This is for the overall ecosystem. This is for hundreds of millions of devices and how quickly those security features can be rolled out. So having this sort of velocity of security innovation, I think, is one of the things that's really attractive about the mobile platforms at this point. It's also one of the reasons why investing in innovative security research is really important because– this sounds trite, but we're running out of areas to invest. We have encryption. We have verified boot. We have good isolation. We have good sandboxing. We're basically getting to a point where we have hardware root of trust, where what really matters now is the flaws and the bugs.

And so this is an area that Google– and in particular Chrome– were very, very innovative about four or five years ago in saying, we're going to deploy bug bounties. And Android at this point is one of the largest bounty programs in the world. One of the unique characteristics of the Android bounty program is the amount of investment we're seeing in terms of researchers in areas of the world that historically have not been prominent in security research. I'm hinting at China, specifically, where we're seeing a significant number of the researchers that are looking at Android and finding vulnerabilities on Android are based in China. So we've tapped into security research and security innovation that really is not visible to any of the other platform providers, which is pretty interesting. I have no doubt that worldwide there are people looking for the bugs, but only on Android are they actually then reporting them into the vendor and allowing us to make the platform more secure.

So those are some of the things that we're doing at the platform level. I want to spend a few minutes talking as well about the changes and sort of work that we've done on services. And to provide a little bit of context, services historically have been something that is left to a third party. If you look at the desktop platforms, for example, Microsoft, Apple, and the other desktop platform providers said, let's create an ecosystem of third party vendors– Symantec, McAfee, Kaspersky, et cetera, et cetera, right? Dozens, if not hundreds, of those security research and anti-virus and security providers. And that was a model that worked relatively well. But it had significant gaps in terms of the coverage that those solutions were able to provide, and in terms of the ability for those solutions to influence the direction of the platform and also respond to knowledge that the platform provider had. So about five to six years ago when we were looking at Android, we said, you know what?

Rather than having that capability be entirely external to the platform provided by third parties, let's make sure that we provide a best of breed set of security services integrated with the applications that we're putting onto these devices. And so Google Play and Google Play Services incorporated a suite of security services and made it available for all devices that have the Google applications on them. That's a very different model than what we've seen on other platforms in the mobile space, where instead, those other platforms have said security services are basically verboten, or are very significantly impeded. They're not provided by the platform provider. And in fact, it's very difficult to implement on those other platforms. And so Android has taken a very different direction there. And it's one that, I think, over time is going to be really, really important. So when I talk about security services, and I've given you sort of that framing, here are the types of services that we're providing right now, out of the box, on any device that has Google Play on it.

The first one is what we call Verify Apps. Security is not an area where we've had a ton of marketing investments. So these are not super sexy names. We need to add some X's and Y's and things to make these more exciting. Verify Apps does exactly what you would expect. It verifies the apps that are being installed onto the device. In addition to checking the certificate that's used to sign the application, Verify Apps checks with Google at the time of installation to see whether this is an application that exposes the user to a heightened risk. Is it– I'll use the sort of desktop nomenclature– is it a virus? Turns out on Android we don't have viruses for various reasons, inclusive of the fact that apps are signed. And so the viral mutation that is the heart of why we call things viruses doesn't apply. But is it a piece of malware? Or is it something that could be a potentially harmful application? And so every Android device that has the Google applications on it does these real-time checks at the time of installation.
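To picture the certificate check described above: one thing an install-time scanner can do is hash the app's signing certificate and compare the digest against a list of certificates known to sign harmful apps. This is a minimal sketch under assumed inputs– the "known bad" list and the certificate bytes are hypothetical, and this is not a description of Google's actual implementation:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Set;

// Sketch of one install-time check in the style of Verify Apps: hash the
// signing certificate and look it up in a blocklist of known-bad signers.
public class SigningCertCheck {

    /** Hex-encoded SHA-256 of the signing certificate's DER bytes. */
    public static String certDigest(byte[] derEncodedCert) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256").digest(derEncodedCert);
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
    }

    /** True if this signing certificate appears on the known-bad list. */
    public static boolean isKnownBad(byte[] cert, Set<String> badDigests) {
        return badDigests.contains(certDigest(cert));
    }

    public static void main(String[] args) {
        // Hypothetical certificate bytes; a real check would use the DER
        // encoding of the APK's signing certificate.
        byte[] cert = "example-cert-der-bytes".getBytes();
        Set<String> badList = Set.of(certDigest(cert));
        System.out.println(isKnownBad(cert, badList)); // true: digest is on the list
    }
}
```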

And then, again, in the background on an ongoing basis we check all of the applications on the device to see whether or not it could be a potentially harmful application. That has incredible consequences, which we'll talk about in just a second, in terms of how clean we've been able to keep the ecosystem. Another thing that we have on Android devices is what we call the sensor network. Sensor network is exactly what you'd expect. It's a network of sensors on these devices that checks for non application-based security risks. So beyond verifying the apps, we also are looking at is this device connecting to a network that's potentially hostile? Are we seeing attempted exploitation of media file formats on these devices? And I'll talk a little bit about what the consequences of that are. So we've got that across a large number of devices. Developer APIs, I think, are probably one of the things that are even more interesting for you folks, which is taking the information from Verify Apps and the sensor network and exposing that so that you can incorporate the intelligence that Google is generating about what's going on in the ecosystem into your own security model, and into how you're thinking about those devices and those applications.

So you can basically say, I don't want to have my app work if there's ever been a piece of malware on this device. And Google's going to provide the services for you to be able to do that. And the last service that we have there is finding your lost device, right? Or being able to respond in the event that a device is lost or stolen. The net of these services is incredible visibility into the ecosystem. And to give you a flavor for it, we've got over a billion and a half devices that are protected right now with these services. They're checking in on a very regular basis. So on average it's about once every other day, where we're doing about 750 million scans of the ecosystem, individual devices around the ecosystem on a daily basis. And the number of applications that we're looking at is also incredibly large. Because that's one of the largest concerns that exist in the ecosystem is what are the applications that are there. The result of that is that we're able to manage the level of risk in the ecosystem quite effectively.
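To make the developer-API piece concrete: the SafetyNet Attestation API (mentioned later in the talk) returns the device verdict as a JWS whose Base64url-encoded payload carries fields like `basicIntegrity` and `ctsProfileMatch`. The sketch below only decodes and inspects that payload and is an illustration, not the full flow: a production verifier must also validate the JWS signature, certificate chain, nonce, timestamp, and package name, and the crude string check here stands in for a real JSON parser.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch: reading the integrity verdict out of a SafetyNet Attestation JWS.
// NOTE: this decodes the payload only; it performs no signature verification.
public class AttestationSketch {

    /** Decode the middle (payload) segment of a JWS compact serialization. */
    public static String decodePayload(String jws) {
        String[] parts = jws.split("\\.");
        if (parts.length != 3) {
            throw new IllegalArgumentException("not a JWS compact serialization");
        }
        byte[] json = Base64.getUrlDecoder().decode(parts[1]);
        return new String(json, StandardCharsets.UTF_8);
    }

    /** Crude boolean-field check; a real verifier would use a JSON parser. */
    public static boolean fieldIsTrue(String json, String field) {
        return json.replace(" ", "").contains("\"" + field + "\":true");
    }

    public static void main(String[] args) {
        // Hypothetical payload; header and signature segments are elided
        // as "e30" (an empty JSON object) and "sig" for illustration.
        String payload = "{\"nonce\":\"R2Rra24f\",\"basicIntegrity\":true,"
                + "\"ctsProfileMatch\":false}";
        String jws = "e30." + Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes(StandardCharsets.UTF_8)) + ".sig";

        String decoded = decodePayload(jws);
        System.out.println("basicIntegrity = " + fieldIsTrue(decoded, "basicIntegrity"));
        System.out.println("ctsProfileMatch = " + fieldIsTrue(decoded, "ctsProfileMatch"));
    }
}
```

An enterprise backend could then gate access on the verdict, e.g. refuse to serve data when `basicIntegrity` is false.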

This is a graph going back to 2014– so almost three years at this point– showing the rate of devices having what we classify as a potentially harmful application installed on a device. This is a malware rate, to put it quite bluntly, for the world. And I want to call that out– most malware rates that exist are from highly managed environments. If you're in an enterprise, you might be looking at– what's the frequency with which a piece of malware gets onto my Mac fleet, or gets onto my iPhone fleet, or gets onto my Windows fleet? For the desktop platforms, those numbers often are on the order of 10% or so. A managed environment with users who have active anti-virus running on their device, on their desktop platform– they might even have whitelisting. There may be other characteristics or security defenses that are in place on those devices. But they still have a malware rate that's roughly an order of magnitude higher than what we've seen on unmanaged Android devices worldwide.

So a very different statistic. And we see sort of roughly an order of magnitude cleaner. In fact, if you look at something that's a little bit more similar to what your enterprise fleet might look like, it would be the second number, which is for Android devices that are only installing applications from Google Play, what do we see as their malware rate? And here it's actually about 1/20th of the rate that you see for the average across the ecosystem. It's 0.05%. So a full order of magnitude cleaner. And that's like the basic, most simple management that you can do is to just say only allow installation of apps from Google Play. If you take additional steps that are provided through EMMs, like whitelisting or filtering the applications, you could imagine that you could cut this down even more substantially. So we're in an incredibly clean place for the overall ecosystem, which is great if you're doing B2C or delivering applications directly to consumers. And if you're talking about managing within your own fleet, you're able to get much, much cleaner than that as well.

These scans also give us visibility into exploitation of issues that are found out in the wild. We have a sensor network that's been delivered out to over 1.4 billion devices. And we're able to look back both in real time but also historically to understand the level of risk that devices might be exposed to. This is a chart. These are the three vulnerabilities that have received probably the most attention on Android over the last four years. They were sort of the marquee Android vulnerability at the 2013, '14, and '15 Black Hat conferences held in Las Vegas in sort of the middle of the summer. So Master Key, Fake ID, and Stagefright, sort of in order. Two of these are vulnerabilities that specifically affected the verification of signatures at the time of installation. And so we have perfect visibility for devices that have Verify Apps and SafetyNet turned on into the level of exploitation, because we actually checked every single install that happens on those devices through Verify Apps.

So we can go back and we can with confidence say there was zero exploitation before those issues became public. And that even after they became public, the level of exploitation that took place was extraordinarily low. And then you can contrast that to the level of communication that existed in the media. Often what I like to do is look at our data and say, did more people read about how scary this thing was relative to the number of people who were affected? And almost uniformly that's going to be the case. There are a lot more people that are frightened by these things than are actually affected by them, which puts us in a pretty good place. Stagefright was a watershed, I think, for the Android ecosystem at large. In terms of the effect that it had, it ended up driving much more rapid adoption of security bulletins and security patches in the ecosystem. We'll talk more about that in just a second. But what's interesting about it is here we are going on a year and a half after that issue became public.

And there still are no confirmed instances of any exploitation. One of the things that was really fascinating over the last couple of days is taking a look at a new trove of security vulnerabilities and realizing that none of them affected media server, none of them referred to Stagefright. So despite the level of attention that was focused on that particular issue, it doesn't seem to have had an actual effect in terms of the user security that's out there. Again, not claiming that it didn't. I'm claiming that we haven't seen it, which is, I think, pretty interesting– the distance between what's happening in the ecosystem and the level of communication around risk that exists in the ecosystem. So if you're interested in the services and getting a deeper dive into how those services work, Edward Cunningham, who is the product manager for those services, is going to talk tomorrow. He's going to go into how we're using machine learning to detect specific types of attacks that might be taking place against users, some of the work that we've been doing with neural networks and detecting those, and talk through a number of areas of investment that we're making.

So I think that is actually one of the most interesting things that's going on is those advancements and services. The other area that I want to talk a bit about is ecosystem updates. And there's two different areas that we're investing in. Google, obviously, has an obligation to build the most secure version of the platform that we possibly can and to get that out to as many people as we possibly can. But there are two other parties that have an impact on the security of a device. The first is application developers, and let me talk a little bit about my experience as someone who came from the world of application developers. Prior to being at Google, I was working on security for desktop applications, head of security at Adobe. And what was interesting to me in the Windows ecosystem is for the first handful of years all of the focus on security was at the operating system level. And then circa 2004, 2005 there was a transition where exploitation shifted from the platform itself into the key applications.

And what were those key applications? Adobe Acrobat and Reader and Adobe Flash were two of them. The browsers were another one– Firefox, IE. And Chrome wasn't there at the time but quickly became one of the areas that was the focal point, and it is still today. We also began to see a lot of exploitation that was targeted at other document formats. And so what was interesting was no matter how hard we had made the platform looking at Windows, there was this shift in where the weak spot was. And so on the Android side, we wanted to make sure that that shift doesn't take place. And so we're very proactively beginning to look at application developers and make sure that they're in a much better place. By default, since most applications on Android are written in Java– which is a type-safe and memory-safe language– by default, they're already in a better place. But we wanted to see if we could do more than that. So the approach that we took was beginning to use the scanning services that we had built to find malware and repurposing that.

And so the repurposing that we did was to look for basic vulnerabilities in applications. This is a picture of the set of new technologies that we introduced where we were looking for vulnerabilities over the last year. They range from things like libupnp and Apache Cordova where what we were looking for was an old version of a library that had a known vulnerability. We scan all the applications that are in Google Play, and then we notify those developers, hey, you're running an old version of this. In many instances, the developer's like, hey, thanks. I didn't even know that. And they immediately fix it. In some instances, the developer's like, clueless, hasn't been doing anything, isn't paying any attention, and the app continues to exist. The next time that they go to update their application, add a new feature, ship a new release, get new users, they're told you can't do that until that application has had the vulnerability that we identified fixed. And so this has ended up having a very significant sort of– what I think of as raising the base level of security that exists in the applications.
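The simplest of these checks– flagging an app that bundles a pre-fix version of a library such as libupnp– boils down to a version comparison against the first patched release. The sketch below is an illustrative reconstruction, not Google's actual scanner, and the version numbers are examples only:

```java
import java.util.Arrays;

// Sketch of the simplest app-scanning check: flag an app that bundles a
// library version older than the first release that contains the fix.
// The version numbers used in main() are for illustration only.
public class LibraryVersionCheck {

    /** Compare dotted version strings numerically: returns -1, 0, or 1. */
    public static int compare(String a, String b) {
        int[] va = Arrays.stream(a.split("\\.")).mapToInt(Integer::parseInt).toArray();
        int[] vb = Arrays.stream(b.split("\\.")).mapToInt(Integer::parseInt).toArray();
        for (int i = 0; i < Math.max(va.length, vb.length); i++) {
            int x = i < va.length ? va[i] : 0;   // missing components count as 0
            int y = i < vb.length ? vb[i] : 0;
            if (x != y) return Integer.compare(x, y);
        }
        return 0;
    }

    /** True if the detected version predates the first patched version. */
    public static boolean isVulnerable(String detected, String firstFixed) {
        return compare(detected, firstFixed) < 0;
    }

    public static void main(String[] args) {
        // e.g. warn a developer shipping a build older than the fixed release.
        System.out.println(isVulnerable("1.6.17", "1.6.18")); // true: predates the fix
        System.out.println(isVulnerable("1.6.18", "1.6.18")); // false: already patched
    }
}
```

The real pipeline layers notification and, eventually, Play-update blocking on top of a check like this, as the talk describes.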

So the simplest one is something like checking for a vulnerable version of a library. There are more sophisticated ones as well, like the SSL Error Handler, where what we were looking for is whether an application is handling SSL errors in a way that's safe. Say you've enabled SSL on a particular network connection and you get back an SSL error. If what you say is, that's OK, just connect, that's probably not what was intended, or you wouldn't have turned on SSL at all. But it's also a very common thing that a developer will do in a testing phase for an application, right? They'll say, I don't really have a current certificate on my test server, and so I'm going to allow the connection to go through. And so what we've done here is we've looked for those types of instances where the default handler is one that's allowing all connections. We flag the application. We tell the developer, look, this is the problem. And then they have the option of either disabling SSL, if that was their intention, or fixing it so that the SSL Error Handler is actually a sane SSL Error Handler.
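On Android, the handler in question is `WebViewClient.onReceivedSslError`, where the unsafe pattern is calling `handler.proceed()` unconditionally. Since that API only exists on a device, here is a plain-Java sketch of the default-deny policy the check is looking for; the debug-only allowlist and the host names are illustrative assumptions, not a recommended production design:

```java
import java.util.Set;

// Sketch of a sane SSL-error policy. The pattern the scan flags is the
// equivalent of "always proceed"; a sane handler cancels by default and at
// most allows an explicit, development-build-only exception.
public class SslErrorPolicy {

    // Hypothetical development hosts a team might (temporarily!) exempt.
    private final Set<String> devHostsAllowedInDebugOnly;
    private final boolean debugBuild;

    public SslErrorPolicy(Set<String> devHosts, boolean debugBuild) {
        this.devHostsAllowedInDebugOnly = devHosts;
        this.debugBuild = debugBuild;
    }

    /** Returns true only for an explicitly exempted host on a debug build. */
    public boolean shouldProceed(String host) {
        // Default-deny: certificate errors on production traffic always cancel.
        return debugBuild && devHostsAllowedInDebugOnly.contains(host);
    }

    public static void main(String[] args) {
        SslErrorPolicy debug = new SslErrorPolicy(Set.of("test.example.internal"), true);
        System.out.println(debug.shouldProceed("test.example.internal")); // true: debug-only exception
        System.out.println(debug.shouldProceed("bank.example.com"));      // false: default-deny
    }
}
```

In an actual `onReceivedSslError` implementation, this maps to calling `handler.proceed()` only when a policy like this returns true, and `handler.cancel()` in every other case.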

So there's a variety of different levels of sophistication on the types of warnings that we're able to give to developers. I mentioned that in some instances we're using sort of stronger enforcement mechanisms through Google Play to try to make sure that these applications are being brought up to date. And so that's had a pretty significant impact. This is a graph of the number of applications that have been fixed as a result of us delivering warnings to them. So in 2016, almost 300,000 unique applications that had their security improved. So incredible scale in terms of being able to deliver security protections at the application level. So application developers are one area of focus for us. Another is around devices and the devices that are being built using Android software. As you may know– I expect many people do, but there probably are a few that don't– Google doesn't ship software on devices, with the exception of the Pixel and the Nexus devices.

So roughly 1% of the Android ecosystem consists of devices that have binaries built by Google. The other 99%: an OEM takes source code from the open source project, compiles that, and then delivers that as the software on top of those devices. They go through certification, and CDD, and a variety of other compatibility requirements in order to be able to make sure that the devices work well with the rest of the Android ecosystem. But ultimately it's the device maker that is responsible for delivering the software onto those devices. And so one of the great challenges is how do we make sure that the devices that are out there have the most recent version of the software, and that any changes that might have been made by the OEM don't negatively affect security on those devices? And so we work really closely with device makers to make sure that they're doing that. One of the other things that we work with are carriers. So this is a picture to give you a rough sense for how many carriers there are worldwide that we're working with on an active basis.

So there are probably a few more than this. This is a snapshot taken not too long ago. But there are 351 different carriers that are out there in the world. Carriers– in the United States we're talking about AT&T, Verizon, T-Mobile, Sprint, and a number of other ones as well. Carriers play a very significant role because often they are the source of purchase for a device. You go into a carrier store, you buy a device. They also have requirements about the functionality that exists on those devices. They're very strongly involved in the delivery of software on those devices. They require applications that might be installed. They require configuration to work on their network. In some instances, the network protocols are different. CDMA versus GSM– although that's tending to break down at this point– was one of the variations that would exist within the US networks. And so we're working with these carriers to define what software is going to be on these devices and what the requirements are for it.

Now, historically, cell phones did not get updated basically at all. And there was a very strong resistance to updating software on those devices because of stability at the network level. And so the carriers, for many years, had been defining a testing regime around software for these devices that was designed– correctly, I would argue– to minimize the rate at which device updates could be delivered to those devices. They didn't want devices to be changing because change created instability both at a network level, but also at an individual user level. If the software changed, that was bad. There are other factors as well around congestion and consumption of bandwidth. And so in the Android environment we're dealing with a software stack that's much more complex. We're dealing with vulnerabilities that are being found. We talked about Stagefright and Master Key and a few of the other ones earlier. And so there was a bit of a cognitive dissonance between the approach the carriers had adopted of minimizing the rate at which the updates could be delivered, maximizing the amount of testing that could be done, and the need to deliver security updates in a timely fashion.

So over the last year or so– but going back further than that. But for sure over the last year or so, we've been working a lot with the carriers to differentiate between updates to devices that were nonsecurity specific versus ones that were security specific. And so beginning of 2016 roughly, the average update to an Android device, at a minimum, from the point where an OEM said, I'm done with this update, I'm ready to ship it out to customers– until it had gone through carrier testing and approval was on the order of six to nine weeks. What was also interesting about that is usually that had to be scheduled another six to nine weeks ahead of time. So we're really talking about an 18-week period basically from when the OEM is done to when an update could make it into the hands of a consumer. So very difficult to deliver security updates in a timely fashion. Over the last year and a half, we've worked with the carriers– basically with all the major carriers, about 40 or 50 of them worldwide– to make sure that if an update was a security only update, then that could be delivered in less than a week.

So they've gone through and said, OK, if issues are in security bulletins, and if we're provided with the software from Google, then we're going to be willing to let those go through at a much faster approval process. And so we've taken advantage of that to make sure that we're able to deliver security updates very quickly to devices. So we've sort of eliminated some of the cost associated with it, but we haven't eliminated a lot of it. So the next thing that we've been investing in over the last year or so is in working with specific OEMs to try to refine what their development process looks like. And so I want to give you a little bit of a flavor for where we've been focusing on our effort. So beginning of last year, 2016, we sat down and we said, OK, we're trying to change an ecosystem that has over 60,000 unique devices in it to get it to a point where it's able to deliver security updates in a timely fashion. How are we going to approach that?

So we took the top OEMs and the flagship devices for those top OEMs, and we said, we're going to focus initially on updating just those devices, the ones you might be familiar with because you've seen them on a billboard, or they cost $600, and they're receiving a lot of attention. Those are the ones where we think there's the best opportunity to work with OEMs to refactor their approach. So we talked to the top five OEMs. We said, let's look at just your flagship devices because, remember, a lot of these OEMs produce literally dozens if not hundreds of models. It ended up being about 29 device models across those five OEMs. The flagship in North America might be different from the flagship in Europe, which might be different from the flagship in Asia. So there's a little bit of complexity there that may not be obvious as a consumer. I like to anchor this and say, at this point you're already talking about more device models than any other platform. We're not talking about the full range.

We're talking about just a tiny fraction of the ecosystem, and even looking at these you're already talking about more complexity than any other ecosystem in the world. Now, I mentioned carriers, and the fact that carriers like to have customization for devices. So when you blow those 29 device models up across all the different carriers and their customizations, what you end up with, just looking at the flagship models for the top five OEMs, is 5,000 unique builds of Android that are currently being serviced as flagship devices. OK. That's 5,000 different development teams out there in the world. Think about that: it's 5,000 Git repositories. It's 5,000 QA teams. It's, I don't know, probably 10,000 different VPs of engineering, all of whom need to be working in sync in order to deliver updates on these devices. And so what started as a really simple, hey, let's just update devices on a monthly basis, turned out to be a little bit more complicated than that.

And so we've been working very steadily with those top five OEMs, and others in the ecosystem as well, to make sure that these 5,000 builds are being brought up to date. But who cares about the process? What you really care about is the output. At the beginning of 2016, less than 20% of flagship devices, I think on the order of 12% to 13%, were up to date, meaning the latest available security bulletin had been applied within the last 60 days. That's just for flagships; the number was even lower for the broader ecosystem. By the end of 2016, we had brought that number up to over 50% on the flagship devices, a very significant change. We're not done, and I just want to make sure I'm expressing that. This is great progress. Going from under 20% to over 50% is huge, right? The line is going up and to the right.

But, you know, there's another roughly 45% that we need to figure out how to get patched as well. And again, this is just for flagship devices. If you're looking at your own environment, you can home in on specific characteristics that may make it safer. The flagship devices that were brought in through top carriers in North America or in Europe have an even higher patch rate, over 70%. This does mean, by the way, that if you bought your device from somewhere other than one of those carriers, you're necessarily going to see a percentage below 50%, because the numbers average out, right? So you want to think a little bit about where you're sourcing the devices you use in your environment. You also have the ability to be more selective about which devices you purchase if you're in an environment where you're thinking about how to source them. One device that we recommend from Google is the Pixel; we consistently see an update rate of over 90% there.

Patches are made immediately available. I can get you details about other manufacturers that provide that as well; they're a little bit more sensitive about us talking about the devices they're making and their update rates, but we can get you more information. If you have questions, feel free to reach out to me and I can put you in contact with somebody who can get you that information. If you want a really simple, good update policy, we provide an API, Build.VERSION.SECURITY_PATCH, which reports the device's security patch level. If you're an EMM, you can write a policy that says, I only want to be running on devices that are up to date, within 90 days or 60 days. I would recommend 90 days, simply because the update cycle is roughly 30 days, so devices that are a couple of weeks out of date will almost every month end up around 30 days behind, and it's not uncommon for them to end up going past 60.
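As a sketch of that policy check: on Android the patch level comes from `Build.VERSION.SECURITY_PATCH`, which is an ISO-formatted date string like `2017-03-01`. Everything else here (the function name, the 90-day default) is illustrative, not a real EMM API.

```kotlin
import java.time.LocalDate
import java.time.temporal.ChronoUnit

// On a device, patchLevel would be read from Build.VERSION.SECURITY_PATCH.
// Returns true if the patch level is within maxAgeDays of `today`.
fun isPatchCurrent(patchLevel: String, today: LocalDate, maxAgeDays: Long = 90): Boolean {
    val patchDate = LocalDate.parse(patchLevel)  // parses ISO yyyy-MM-dd
    return ChronoUnit.DAYS.between(patchDate, today) <= maxAgeDays
}

fun main() {
    val today = LocalDate.of(2017, 3, 10)
    println(isPatchCurrent("2017-03-01", today))  // 9 days old -> true
    println(isPatchCurrent("2016-11-01", today))  // 129 days old -> false
}
```

An EMM policy would evaluate this on each check-in and quarantine or restrict devices that fall outside the window.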

So giving your users the opportunity to actually apply the patch and get up to date is the approach we would recommend; 90 days is probably right. And that gives you a sense of the progress we've made. Before we close on that, I do want to say that I think over the next couple of years we'll continue to see an increase in broad-based adoption of security updates around the Android ecosystem. My forecast is that it's going to take about two to three years before we get to a point where 80% of all devices are up to date. It's taking time for the OEMs to ingest and modify their processes to be able to deliver updates on a more regular basis, but the trajectory is in the right direction. The last thing I want to mention: if data is your thing and you want to know a little bit more about Android security and the work we've been doing there, every year for the last three years we've produced an annual security year in review.

It's about an 80-page document this year, and it'll be coming out in the next couple of weeks. I'd encourage you to take a look at it. It breaks down all the malware data, breaks down update rates, and breaks down a lot of the features and investments we've been making in that space, so you can really understand what's going on across the ecosystem at large. One more thing I want to note: I think often in the security space we focus so much on the technology at the platform or operating system level that we lose sight of the fact that, especially in mobile, we've already begun to unlock business models that were not possible previously. And we've done that from a security standpoint as well. This one cracks me up a little bit. This is a picture of people playing Pokémon Go in Japan. There are similar pictures from around San Francisco; I just couldn't find one that had quite as many heads packed into a little space, all staring at their screens.

And gaming in particular on mobile devices has been an incredible driver of security. The security community will often pooh-pooh it as, ah, it's just DRM, or, ah, it's just games. But this is a billion-dollar business that depends upon the hardest threat model: the person who has the device in their hand is trying to break it. The fact that they're able to build this business depends on two things. One, it depends on the core security properties of the platform. Two, it depends on the fact that they as developers realized they needed to use some of the more sophisticated features and services that were available. So they were one of the early adopters of something called SafetyNet Attestation, where Google tries to assess whether a device is currently operating in a good state or not and passes that information up to the application developer. I think what's interesting is that they've managed to do that and build that billion-dollar business.
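To make that concrete, here is a minimal sketch of the policy decision an app backend might make over a SafetyNet Attestation result. The field names `nonce`, `ctsProfileMatch`, and `basicIntegrity` are real fields of the attestation payload, but the data class and function here are illustrative, and the hard part, verifying the JWS signature chain on the response, is deliberately omitted.

```kotlin
// Policy decision over an already-verified SafetyNet Attestation payload.
// In a real deployment the payload arrives as a signed JWS whose certificate
// chain must be validated server-side first; that verification is omitted here.
data class AttestationPayload(
    val nonce: String,            // must match the nonce the server issued
    val ctsProfileMatch: Boolean, // device matches a CTS-certified profile
    val basicIntegrity: Boolean   // device passes basic integrity checks
)

fun deviceTrusted(payload: AttestationPayload, expectedNonce: String): Boolean =
    payload.nonce == expectedNonce &&
        payload.basicIntegrity &&
        payload.ctsProfileMatch

fun main() {
    val healthy = AttestationPayload("abc123", ctsProfileMatch = true, basicIntegrity = true)
    val tampered = AttestationPayload("abc123", ctsProfileMatch = false, basicIntegrity = false)
    println(deviceTrusted(healthy, "abc123"))   // true
    println(deviceTrusted(tampered, "abc123"))  // false
}
```

A game or payments app would typically fail closed here: if the check fails, degrade functionality rather than trusting the client.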

I don't have any expectation here; maybe some of you are building your entire business on mobile, maybe you're not. But if you are, there are other people doing that as well, and they're scaling to billions of dollars, even in a very high-risk environment. And that's something I want to flag, right? Because I think, especially in security, there's an aversion to change that makes people say, oh my gosh, I couldn't do this on Windows because there was so much piracy and so much flexibility. But on mobile it's actually not just possible, it's happening, and games are one of the primary drivers of it. Similarly, we've seen very rapid adoption of payment infrastructure based on the security properties of the platform. The fact that we're able to go to major payment operators and say, trust the platform, here are its capabilities, is really incredible. These folks depend upon TrustZone, they depend upon hardware security, in a way that others haven't been able to.

And I think that's one of the reasons why, when you look at the most targeted, highest-risk users in the world, more and more we're seeing an acknowledgement that mobile devices are necessary for them to communicate, and that the investments being made are sufficient for them to be able to use these platforms. So it's a really exciting time for me, knowing that the people who are most at risk, whether that's world leaders or dissidents who are being targeted, can rely on the technology they have in a way that was never possible before from a security standpoint. That's pretty exciting for me. So think about how this is relevant in your environment as well, and how to take advantage of it.

 


Security first for a mobile-first business. Android ships securely and offers multiple layers of protection such as 30-day updates, continual machine learning verifications, and a Work Profile designed to keep businesses safe and personal information private.

Missed the conference? Watch all the talks here: https://goo.gl/c1Vs3h
Watch more talks about Mobility & Devices here: https://goo.gl/yl1EqP


