They gather over 300 billion data points annually from a fleet of over 500,000 devices in their customers' cars and trucks. And they use BigQuery to help their customers make better decisions around fleet optimization and fuel economy. But like a lot of organizations, they need that control. As a quick show of hands, how many folks went to the IO 211 session yesterday, Getting Total Control of Your Cloud Resources? OK. About 15% of you. So you can take a quick 5- or 10-minute nap during the first section. No more than 10 minutes, please. I'll go over Cloud IAM concepts, just so we all have the same baseline. Next, I'll go into specific pieces of Compute Engine, how they work, and how they affect your IAM strategy. The bulk of the session will be on implementing industry-standard best practices on Compute Engine. Then we'll wrap up with a summary and Q&A. One other question. Since we moved the session from 11:20 to noon, how many folks have already eaten lunch?
All right. That's about half of you. So I've got my work cut out for me to make sure you don't fall asleep. Unless you're in that nap group that went to IO 211; feel free. For the rest of you, there's one extra item on the agenda. Let's start at the beginning. In this case, it's the beginning of life for [? Louvre, ?] one of the beagles, who belongs to an engineer who brings you Compute Engine. In this photo, he's a few weeks old. One of the things that struck me when I joined the team is how many people owned dogs. I hadn't thought about it before. And I don't want to get into the dogs-versus-cats debate. I know that's a heated issue for some folks, but we have so many dog owners on the Compute Engine and IAM teams that they can guide us through this session. So we'll start at the beginning with Cloud IAM concepts. IAM is about who can do what on which resource. Here are the different whos. There are four types of identities that you'll be using in Google Cloud.
The first is a Google account. This could be someone's Gmail account, or an account that's part of your G Suite domain. These are accounts that belong to human beings. They can call APIs, and they can log in interactively to the console. The second type of identity is a service account. This is the identity that your code runs as on Compute Engine. For example, if you're using Compute Engine and BigQuery to do data analysis, you could grant your code, or the instance it runs on, a service account, and grant that service account permission to the BigQuery data that you need. It's important to note that service accounts cannot log in interactively to the console. We still have a ways to go before the robots take over. The third identity is a G Suite domain. This represents everyone in your G Suite domain and is a way of sharing your resources very broadly. For example, if you wanted to share a particular image with everyone in your organization, you could grant a role to the G Suite domain.
And the fourth, and the one we'll be using most commonly for IAM policies, is a Google group. A group can contain any of these identities: a Google account, a service account, a G Suite domain, or another group. So what happens when someone calls an API? Let's say they're calling the instances get method on Compute Engine. In some cases, the check is one-to-one with IAM: compute.instances.get is the method, and compute.instances.get is the permission we check for with IAM. Other times, it's one-to-many. If you try to create an instance, we'll check: do you have permission to create the instance? Do you have permission to use the network that you're trying to attach the instance to? Do you have permission to use the image that you specified? So sometimes one-to-one, sometimes one-to-many. Here's the what. You give these identities permissions through IAM roles. Roles are just sets of permissions. For example, if you wanted to allow developers in your organization to create, manage, and delete instances, you could grant them the instance admin role.
It includes permissions to start, stop, create, delete, and manage instances. You grant these roles to an identity, like a group, on a resource, such as a project. There are three types of IAM roles, and the one to use depends on your organization's needs. First, a little bit of history. When we started with Compute Engine, we started with three broad primitive roles: owner, editor, and viewer. These gave broad access across services. An editor could create an instance on Compute Engine; it could also create a Cloud SQL instance. Last year at Next, we announced predefined IAM roles to provide more granular permissions. Today, we have over 70 predefined IAM roles. You can use these to grant an employee access to specific parts of Google Cloud. For example, if you wanted to allow your networking and security team to manage all the networks, you could grant them the compute network admin role. That would not allow them to spin up instances, but they could do network management: create new networks, subnets, and routes.
And the third type, as we announced yesterday, is custom roles, which are available in alpha. These allow you to choose exactly which permissions you want to give someone. If you only want them to call three particular methods, you can create a role with just those. You could create custom roles from scratch, but that's not recommended. The best practice is to create them from an existing role. So if you have, for example, an entitlement system already on premises, think about how you want that employee's access to map to Google Cloud and which methods you want them to be able to call. Find the predefined role that most closely matches that, and then add or remove permissions to get your custom role. For example, if you wanted to allow developers to create instances, but not modify the network perimeter by assigning an external IP address, you could remove that permission. The final concept in IAM is the resource hierarchy. IAM is about who can do what on which resource. Here are the resources. At the top is your organization node, which is tied to your domain.
Below that, you can create a folder for each division or team. Within those folders, you could create folders for each product, or a project for a particular service. And within that, specific resources, like instances, subnets, or service accounts. The project provides the isolation between different teams and different services in your organization. So think of putting each individual microservice down at the bottom of this resource hierarchy. We'll talk about that more a little later. OK. So that's the summary of Cloud IAM concepts. Now, we're off and running. This is [? Louvre ?] again. He's a little bit older, a little bit wiser. He knows a little bit about Cloud IAM at this point. Now we'll go into Compute Engine, a few specific ways that it works, and how it affects your IAM strategy. First, if you're thinking about security, you're thinking about what bad outcomes are possible. Not necessarily fun to think about, but important.
Whenever you have a machine, whether it's on premises, a virtual machine, or in the cloud, what would happen if someone unauthorized gained access to it? You want mechanisms to limit what an instance can access as a way of reducing risk. The first such mechanism is OAuth access scopes. These come from the pre-IAM world, where we had those three primitive roles: owner, editor, and viewer. And each project had one service account that all instances ran as. If you have this one account with rather broad privilege, you want some way to limit a particular instance or instance group to just what it needs. Access scopes were a way to do that. They're relatively broad in their granularity. For example, you could grant access to just the Compute API, or just the read-only methods in the Compute API. There was one special scope, the cloud-platform scope, which allowed access to all Google Cloud APIs. We'll talk about that one more a little later.
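As a rough sketch of how a scope check behaves, here is a toy model. This is not actual Google code; the scope URLs are the real ones, but the function is purely illustrative of the idea that cloud-platform is a superset:

```python
# Toy model of an OAuth access-scope check (illustrative only; the real
# enforcement happens server-side inside each Google API).

# Real scope URLs:
CLOUD_PLATFORM = "https://www.googleapis.com/auth/cloud-platform"
COMPUTE = "https://www.googleapis.com/auth/compute"
COMPUTE_RO = "https://www.googleapis.com/auth/compute.readonly"

def scope_allows(token_scopes, required_scope):
    """cloud-platform is the special scope that covers all Cloud APIs."""
    return CLOUD_PLATFORM in token_scopes or required_scope in token_scopes

# An instance granted only the read-only compute scope can't make a
# write call; an instance with cloud-platform can call anything.
print(scope_allows({COMPUTE_RO}, COMPUTE))
print(scope_allows({CLOUD_PLATFORM}, COMPUTE))
```

Note that in this model nothing about the identity itself is checked; scopes only gate which APIs a token may reach, which is why they are coarser than IAM roles.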
Now with IAM, you can use IAM roles as the mechanism to limit what your instance can access through an API. We have over 70 such roles, which offer a lot more flexibility and granularity than scopes. And instances today can each run as their own service account. So each component of your app can have its own identity and its own level of access to the resources in your project or organization. These roles, as we saw earlier, can be much more fine-grained than the broader access scopes. Now, a common operation: someone needs to get into an instance and see what's there, maybe to set something up for the first time or to troubleshoot an issue. We wanted to make that easy in Compute Engine. So if you go to our web console and click the SSH button, magic happens, and in you are. So what do we actually do, and what's the impact on IAM? When you click the SSH button, we generate a new key pair and send the public key into the guest using our metadata service.
That's a key-value data store. Then we use the corresponding private key to start your session. The reason I bring this up is that identity and access management is not just about IAM roles. Think about what all of the identities are, including SSH keys, and what they can access in the guest. There are two options here. First is automatic key management. If people in your organization will only SSH using the Google web console or the gcloud command-line tool, you don't have to do anything. We'll create the key pairs, and create new ones if they've expired. SSH just works. On the other hand, a lot of you have existing SSH key management on premises. If you want to continue to manually manage those SSH keys, you can do that by setting them in the SSH keys metadata field, which puts the public keys into your instances. If you want to allow all of your developers to SSH into all the VMs in your project, you can set the public keys once in the project metadata, and they get inherited by all instances.
On the other hand, if you wanted to give a contractor or a vendor access to a specific instance to troubleshoot an issue, you could put their public key in the instance metadata. So there are two levels; there's a resource hierarchy there as well. I should note that SSH is a particularly powerful permission to give someone. First, anyone who can SSH in is, by default, also part of the [INAUDIBLE] group. But secondly, they can run commands as that instance's service account. So that's a way for someone to get access to resources that you may not have given them directly. And I'll show you what we did about that. We treat a service account both as an identity and as a resource. It's an identity in the sense that if your code needs to call BigQuery, you can grant an IAM role to your instance's service account to access BigQuery. It's a resource in the sense that you can grant users or groups permission to act as that service account. And we wanted this to be an explicit decision that you make.
We don't want there to be any elevation of privilege; we want it to be an explicit choice. Here's what this could look like. Suppose we have a project where a user is an instance admin. They can create and delete instances. This is the sign for creating and deleting instances, by the way. And you give the service account access to BigQuery, so it can run queries, create saved queries, and write new data. If we didn't treat the service account as a resource, here's what could happen. The user tries to get a dataset from BigQuery. BigQuery checks with Cloud IAM. IAM checks against the IAM policy, actually against all policies in the hierarchy. IAM rightly says no, and BigQuery rightly says no. So far, so good. Now, suppose this instance admin creates an instance, and Compute Engine checks with IAM. You have permission to create instances; you have permission to create disks. Here's your instance, running as that service account.
And now that user could SSH in and use the service account's credential to access the data in BigQuery that we didn't want them to have access to. That's not what we want. Instead, we want it to be an explicit choice. So if someone has not been given permission to act as that service account, this is what happens. BigQuery, again, says no, as it should. But when they ask Compute Engine to create an instance that runs as that service account, we also check: do you have permission to act as the service account? IAM says no, and we say no. No elevation of privilege. We wanted to avoid surprises. If you want someone to act as another identity, it's an explicit choice that you make. The service account actor role is the predefined role that grants this "act as" permission. And Compute Engine checks for this permission in two cases. The first is when the API call would cause different code to run inside the guest, for example, if you're setting a startup script or a shutdown script.
We check that you have permission not only to set that metadata, but also to act as that service account. And similarly, if you're going to change the service account that the instance runs as: do you have permission to act as the new service account? All right. That's the background. We've covered the IAM concepts and a little bit about Compute Engine. Now we're on to identity best practices. Identity here is embodied by Milo, who is a chihuahua-rat terrier mix, or a spider at Google scale, depending on who you believe. The first best practice is to reuse your existing identities. How many of you have an on-premises LDAP server right now? OK. That's about three quarters of you. If you have an on-premises LDAP server, like Active Directory, you can install the Google Cloud Directory Sync tool, a free download, to sync users and groups one way from your LDAP server into G Suite. So you can continue managing users and groups the way you always have, and now be able to grant them permissions on cloud resources.
The second best practice around identity is to not store keys or secrets in your code. You've probably heard news stories about people scanning GitHub looking for API keys, or heard about the Truffle Hog project to help you find them. We wanted to make sure that you don't have to manage keys or access tokens in most cases. If your code is running on Compute Engine and you need to call another Google API, like BigQuery, just install the client library for the language you need, the Python client library, for example, and things just work. You don't have to get an access token. You don't have to create a key pair for the service account. You just call the API and it works. If you're doing local development, you can download the same client library to your laptop, create a new key pair for that service account, and then set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path to that key. You don't have to put it into your code, and the calls just work.
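The credential lookup the client libraries perform, key file when GOOGLE_APPLICATION_CREDENTIALS is set, otherwise the metadata server on Compute Engine, can be sketched roughly like this. It's a simplification; the real resolution chain has additional steps:

```python
import os

# Simplified sketch of how Application Default Credentials resolve.
# The real chain has more steps; this shows the two cases from the talk.
METADATA_TOKEN_URL = ("http://metadata.google.internal/computeMetadata/"
                      "v1/instance/service-accounts/default/token")

def credential_source():
    key_path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if key_path:
        # Local development: a downloaded service account key file.
        return ("service_account_key", key_path)
    # On Compute Engine: tokens come from the metadata server, so no
    # key material ever needs to live in your code or on disk.
    return ("metadata_server", METADATA_TOKEN_URL)
```

The point of the pattern is that the same application code runs unchanged on a laptop and on a VM; only the environment decides where credentials come from.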
We want it to be easy to not have to manage tokens and keys. The next identity best practice, because again, identity is about more than just IAM roles, is requiring more secure authentication. You may have heard the identity-as-a-service talk, where a lot of folks mentioned they were already using two-factor auth, which is awesome. We also announced security keys, a more phishing-resistant token that you can use to identify a particular user. You can enable this for your G Suite domain just by going into the Admin console and clicking the radio button. The second set of identities to think about are the people who can SSH into your instances. First, we recommend disabling password authentication, because key pairs are more secure. And secondly, disable root login, so if someone has to run an elevated command, you have a better chance of finding that in the auth log afterwards. Compute Engine images already do both of these. There are a lot of other best practices around image management that I couldn't do justice to here.
So there's a doc for more information. That brings us to least privilege. [? Dico, ?] a bullmastiff-boxer mix, embodies least privilege. Good luck getting through: there are two chairs, and there's [? Dico ?] in the middle. The first best practice here on Compute Engine is, obviously, granting predefined or custom roles, usually. Depending on how you've set up your resource hierarchy, there may be times when you could give someone, like your networking team, the owner role on a project that only has networking resources. We'll talk about that more in a few minutes. There are some operations that are particularly powerful that you want to be aware of. The first is setting an IAM policy. Someone who can set an IAM policy can grant themselves, or someone else, any role on that resource. By default, owner and organization admin are the two roles that have that permission. The second powerful operation is acting as a service account, as we talked about a bit earlier.
Because if you can act as a service account, you can get access to the data that service account has access to. You can do things like set metadata to modify startup and shutdown scripts. There are three roles that have that: owner, editor, and service account actor. The best practice on Compute Engine is that when you grant the service account actor role, that permission to act as a service account, you grant it on a particular service account rather than on the project. If you grant it on the project, the group you gave access to can now act as all of the service accounts in your project, including those they may have nothing to do with. There's one exception. If you want someone to be able to set project-level metadata, for example, to set a common startup script for all instances in your project, give them the service account actor role on the project. The reason is that when you set project metadata, we don't want to just check whether you have permission on all the existing instances.
That startup script will affect all existing instances, but it will also affect all instances you create in the future. And that's why we check for that permission at the project level and not just on each individual instance. Any time we talk about best practices around IAM, we're duty-bound to mention: use groups, not users. I won't belabor this point. I will say one thing: if you're managing groups in G Suite, not on your on-premises server, that's in the Admin console, admin.google.com, separate from the IAM policies you'll be setting in the Cloud console. That's all I'm going to say about groups. So I've talked a little bit about least privilege for users. Least privilege is also necessary for applications. Suppose you have a content streaming company, and you want to both stream content and do some data analytics: maybe understand which piece of content to recommend next, which ad to serve up, how to increase conversion.
The best practice here is to give each instance, or group of instances, its own service account. In this case, the content streaming instances have one identity, the content streamer service account, and the instances you're using for data analysis have the data analyzer service account. You grant each of those service accounts just the IAM roles, on just the resources, that they each need to do their jobs. But then, give each of these instances a very permissive scope. Remember, we talked about scopes and IAM roles earlier. It seems a little odd that in the least-privilege section of a session we would say grant very permissive scopes. Here's why that's OK. When your instance makes an API request, there are two mechanisms of authorization that it has to pass. When this VM is querying BigQuery, because you're running inside of Compute Engine, we automatically get the access token. That access token includes both the identity of the service account and a set of access scopes.
What BigQuery will do is first check: does this set of scopes allow this API to be called? And if so, it asks Cloud IAM: does this service account have access to this resource? Because Cloud IAM is the definitive authorization store for Google Cloud Platform, you can leave the scopes wide open. That makes it easier to configure your instances, but doesn't leave you in a riskier state, because even if someone has wide-open scopes, IAM is still the definitive answer as to whether they can access a resource. I'd also be remiss if I didn't mention key rotation. There are two sets of keys you'd be interested in here. One is keys for the service account. For example, if you're calling in from outside of Google Cloud, you need to export your own service account keys. You can do key rotation by creating a new key with the create API, replacing the old one with the new one, and then deleting the old one. Public keys for SSH are treated a little differently, in that instance metadata is written in a read/modify/write paradigm.
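That read/modify/write rotation of the SSH keys metadata can be simulated with plain data structures. The key values and username here are made up, and in the real API each write goes through setMetadata with a fingerprint from the last read, so concurrent writers can't clobber each other:

```python
# Simulation of rotating a public key in the SSH keys metadata entry.
# In reality this is two metadata writes: first add the new key, then,
# once it's in place, remove the old one.

def rotate_ssh_key(metadata, username, old_key, new_key):
    """Add the new public key, then drop any entry holding the old one."""
    keys = metadata.get("ssh-keys", "").splitlines()
    keys.append(f"{username}:{new_key}")             # write 1: add new key
    keys = [k for k in keys if old_key not in k]     # write 2: remove old key
    return dict(metadata, **{"ssh-keys": "\n".join(keys)})

meta = {"ssh-keys": "alice:ssh-rsa AAAA_OLD alice@example.com"}
meta = rotate_ssh_key(meta, "alice", "AAAA_OLD",
                      "ssh-rsa AAAA_NEW alice@example.com")
```

Doing the add and the remove as separate writes means there is never a moment when no valid key is present, so existing sessions and automation keep working through the rotation.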
So you add the new key by setting metadata, do the key replacement, and then remove the old key by setting metadata again. For the same reasons you don't want to store secrets or keys in code, don't store them in instance metadata either. The purpose of the metadata service is to provide configuration information between the Google Cloud APIs and your instances: things like startup scripts, shutdown scripts, and public SSH keys. This metadata is treated as part of the instance resource, so anyone who can call get on an instance will be able to see all of the metadata on that instance. If, for example, your instances use a private token to access a service that you manage yourself, store that token, or that password or key, in Cloud Storage, and then give the instance's service account access to read it. That way, anyone who lists the instance won't see it, but your instance will still have access to it. That brings us to the third section, centralized control, embodied here by Kira, a Klee Kai who has centralized control of this couch and also part of the fireplace.
The first best practice around centralized control is to bring your organization's existing structure to the cloud. If you have, for example, a lot of developers who are allowed to create cloud projects, what happens when one of those developers leaves the company and those resources are still running? With the organization node, any project someone creates with an email address in your domain automatically becomes part of the organization node. Therefore, your organization admins can set policies to allow a particular group of individuals to gain access to those projects if, for example, the project owner leaves. You can also take advantage of the organization policies that my colleagues Ray and Ray discussed yesterday in IO 211 to set the ground rules for what people are allowed to do within the organization, even if they have relative autonomy within their projects. The best practices here are: first, use the org node for centralized control. Map your company's domain to your organization node, and then create folders for divisions or teams.
Create projects for each service and for each environment, so you have one project each for a team's dev, test, and prod environments. That way, you can keep stricter control over the prod environment while allowing developers more freedom in dev and test. A lot of organizations have standardized, IT-blessed images, and we want to make it easy to share those images with people in your organization, so it's easy for them to create the instances you want them to. The best practices here are: first, start with Google-provided images as your base. We make optimizations in these images to run them on Compute Engine, and they already include a lot of the best practices around image management, some of which I alluded to earlier. The second best practice is to publish your approved images to a shared project and make those available to the developers who need them. I'll show you an example in a moment. And the third best practice: when you're testing images that haven't been approved yet, do that testing in a separate project, not in the shared images project.
You don't want someone getting a sneak peek at the next round of images before it's ready. So here's what this could look like. Suppose you want to give your data scientists permission to create, update, and delete instances, but you want them to use your blessed images, the ones created by your IT staff. You could first create a data analysis project and give the data scientist group the instance admin role in that project. Now they can create and delete instances. Then, have a separate shared images project where your IT staff has the instance admin role, so they can create and delete instances and images. And give the image user role to the data scientist group. Now, as your IT staff publishes approved images, the data scientists just get access to them. They don't need to create their own copies, and they don't have to download them into their own project, where suddenly you'd have a proliferation of copies. They can use them directly from the shared images project.
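The setup just described boils down to a few IAM bindings across two projects; here it is sketched as data. The project and group names are made up; the role IDs are real predefined roles:

```python
# Sketch of the two-project image-sharing setup. Project and group
# names are hypothetical; the role IDs are real predefined roles.
policies = {
    "data-analysis-project": [
        {"role": "roles/compute.instanceAdmin.v1",
         "members": ["group:data-scientists@example.com"]},
    ],
    "shared-images-project": [
        {"role": "roles/compute.instanceAdmin.v1",
         "members": ["group:it-staff@example.com"]},
        # Image User lets the data scientists create instances *from*
        # these images without being able to modify or delete them.
        {"role": "roles/compute.imageUser",
         "members": ["group:data-scientists@example.com"]},
    ],
}

def roles_for(member, policies):
    """List (project, role) pairs granted to a member."""
    return [(proj, b["role"])
            for proj, bindings in policies.items()
            for b in bindings if member in b["members"]]
```

Running `roles_for("group:data-scientists@example.com", policies)` shows the shape of the grant: full instance admin only in their own project, and read-style image access in the shared one.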
Similarly for controlling networks: a lot of organizations have a security team that wants control over all network traffic, to make sure traffic is routed through a particular set of firewall rules or proxies. We make that easy in two ways. One, if you were already a Compute Engine customer before we announced cross-project networking during the VPC session here, you may have a number of projects that already have networks and subnets in them. You can give developers the instance admin role on those projects, so they can't monkey around with the firewalls or the networking, and give your network and security team the network admin and security admin roles on those projects. If you're starting something new, you have an even more flexible option with cross-project networking. The recommendation there is to create one XPN host project per network that you're sharing, and then grant the networking team the owner role on that project.
This is one of those exceptions where you don't have to grant a very narrow role, because that project contains only networking resources. And the networking team can grant other groups access to specific subnets. This means the data scientists can have easy communication between all of their instances while still maintaining isolation from other services and other teams. So your policies might look like this. In this case, we granted the network user role to the data scientists on just a specific subnet. And the last best practice: this is like the opposite of camping; you very much want to leave a trace. Even during this conference, we got a reminder as to why. Columbia Sportswear, according to The Seattle Times, sued a former employee who allegedly, on their last day of work, created a new user account, granted broad access to that account, and then used it after leaving the company to access internal information. The best practices in this case on Compute Engine, and some of these apply to Google Cloud more broadly: first, retain audit logs in accordance with your business's risks and mitigation strategies.
The length of time you maintain these logs depends on what risks you're trying to protect against, and also on how many resources you have to go look into all this data afterwards. In Google Cloud, you can set up logs to be exported to Cloud Storage, or to BigQuery for easier data analysis. You can also have them sent via Pub/Sub, so you can respond in near real time if something suspicious happens, like someone setting an IAM policy that you don't like. Other organizations also need to manage and monitor what happens inside the guest. For example, if you need to monitor the sshd logs to see who's trying to access your instances, you can set up the Stackdriver Logging agent to forward those events from all of your instances to a central place, and then look for trends in who's trying to get into your instances. All right. Now for the demo section. Before we switch over: the demo, unlike the other sections, is embodied by two of our team's dogs, not one.
First is [? Kopi. ?] [? Kopi's ?] a poodle, very playful. We're going to go play around now in Compute Engine. That's a relatively optimistic view of the demo. And Noodles, also a chihuahua-terrier mix, embodies another view, depending on how the demo goes. All right. Can we switch over to the demo, please? OK. What we're looking at here is an organization that I created. We have a few different departments: data science, engineering, and shared resources. These are all folders in the resource hierarchy. And we have here the data science dev project. We can examine the IAM policy there. What you'll see is that we granted the compute instance admin role to the data scientist group, so they'll be able to create and manage instances here. We also want those same data scientists to have access to the shared images. So if we look at the images project, you'll see that we granted the compute image user role to that same group. And additionally, because we want to maintain more control over our networking and our firewall rules, we give them access to a shared network in the XPN dev project.
And here they are, that same group. Finally, because they're data scientists, and data is one of their favorite four-letter words, we're also granting them the BigQuery data viewer role and the BigQuery user role on our org data project, so they can access all of this data. I'll show you one other resource. Since we talked about controlling permissions to the APIs that your instances call, let's look at the data science dev project and check the service accounts there. There are two service accounts: one that Compute Engine creates by default, and the data analyzer service account that I created. You can set permissions on this as well. And you can see that the data scientists have the service account actor role on just that service account, not on any other in the project. This allows them to create instances that run as this service account to query that BigQuery data. OK. Now, I'm going to log in as one of the data scientists. It's a very secretive company, so her last name is redacted.
All right. Now we're looking at this from the point of view of Alice, who's a member of the data scientist group. She has access to the data science dev project and can create instances there. Let's say she tries to snoop around a bit. You can see that in this case, neither Alice nor any other data scientist has access to view the IAM policies for this project, but they do have access to some of the other resources, like Compute Engine. Let's go create an instance. We'll put it here in US Central. The first question is which image we're going to use. There are the Google-provided images, with a variety of OSes, including Windows. We're going to go over to custom images, and we can choose to look at custom images in these projects. There are none in data science dev. But if we look at the images project, there we have our custom Ubuntu V2 image. That's the latest one that's been approved, so we'll use that one. Now we have our choice of service accounts.
We'll choose the Data Analyzer service account. And note that when you choose one other than the default, we set the cloud-platform scope for you automatically. That means you only need to think about the IAM roles, not about access scopes, to make sure that the instance and your code have access to the APIs you need. And for the last piece, we will use a shared network. You see we have the XPN dev network that we've shared subnets from, and we can connect to that. So now we'll create our instance using a shared image, a shared subnet, and the only service account that Alice and the other data scientists have access to. All right. So now, let's SSH in there. And you'll see we're going to transfer the SSH key. This says it's generating a new key pair, sending the public key to the instance, and then using the private key to set up the session. All right. Now, just to show you how easy it is to use the client libraries to access the data that you need in Google Cloud from within a virtual machine, I'm going to install the client library for BigQuery.
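The instance Alice creates in the console could equally be expressed as a Compute Engine `instances.insert` request body. Here's a hedged sketch of what that body might look like for this demo; the project, zone, and resource names are hypothetical, but the field names (`serviceAccounts`, `networkInterfaces`, `initializeParams`) and the cloud-platform scope URL are the real ones from the Compute Engine API.

```python
# Sketch of an instances.insert request body matching the demo:
# a shared custom image, a shared subnet, and one specific service
# account with the broad cloud-platform scope.
instance_body = {
    "name": "alice-analysis-vm",  # hypothetical instance name
    "machineType": "zones/us-central1-b/machineTypes/n1-standard-1",
    "disks": [{
        "boot": True,
        "initializeParams": {
            # Custom image shared from a central images project.
            "sourceImage": "projects/images/global/images/ubuntu-v2",
        },
    }],
    "networkInterfaces": [{
        # Subnet shared from the XPN (shared VPC) host project.
        "subnetwork": "projects/xpn-dev/regions/us-central1/subnetworks/dev-subnet",
    }],
    "serviceAccounts": [{
        "email": "data-analyzer@data-science-dev.iam.gserviceaccount.com",
        # The broad scope set automatically in the console: actual access
        # is then governed by the IAM roles granted to this service
        # account, not by finer-grained access scopes.
        "scopes": ["https://www.googleapis.com/auth/cloud-platform"],
    }],
}
```

Because only this service account is shared with the data scientists, the instance can only be created with an identity whose BigQuery access has been deliberately scoped.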
OK. So here is the quick Python script. We import the client library, instantiate the client, and then we just call, in this case, the list datasets method. It will also list all the tables within each dataset. It's important to note what's not done here: specifying a credential, downloading a key, or querying for an access token. And there are the logs from the org data project, the one that this service account has been granted access to, and the different tables in those logs. These are all audit logs from today. So now let me switch over. We'll go to BigQuery. Note, by the way, that when I switched over to the org data project, because we had started in the Compute Engine view, the console shows the same view in the new project. And Alice, rightly, does not have access to view instances or do any instance-level operations in this project. Now, let's go over to BigQuery. OK. And now, I'm just going to show some of the activity that happened in this project today.
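A minimal version of that script, assuming the `google-cloud-bigquery` client library, might look like this. The attribute names (`dataset_id`, `table_id`) match that library's list results; the function is written to take the client as a parameter so the listing logic is easy to follow on its own.

```python
def list_datasets_and_tables(client):
    """Return {dataset_id: [table_id, ...]} for every dataset visible
    to the credentials the client is using."""
    result = {}
    for dataset in client.list_datasets():
        tables = client.list_tables(dataset.dataset_id)
        result[dataset.dataset_id] = [t.table_id for t in tables]
    return result

# On the instance, after `pip install google-cloud-bigquery`, you would run:
#     from google.cloud import bigquery
#     print(list_datasets_and_tables(bigquery.Client()))
# Note what's absent: no credential, no key file, no token exchange. The
# library picks up the service account the VM runs as automatically.
```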
And you can see we're looking at this newest first. A bunch of the operations that we've been doing throughout the demo you can see here, and Alice is the one who's been running them. And then, here are the different methods that have been called: creating instances, setting metadata to set those public keys. All right. Can we switch back to the slides, please? It turns out, we didn't need Noodles after all. Nothing crashed. So in summary, we started with the IAM concepts– review for some of you, new for others. Then we talked about how specific parts of Compute Engine work and how they affect your IAM strategy: the relationship between scopes and IAM roles, SSH keys and the metadata server, and the service account actor role and why it's important that that's an explicit decision and not something you want to be surprised by. Then, I shared best practices in three areas: identity, least privilege, and centralized control. These were the best practices around identity and how to implement those on Compute Engine.
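For reference, the kind of activity query Alice ran over the audit logs before we switched back to slides could look roughly like this. The table name is a hypothetical placeholder, and the column paths assume the typical schema audit logs get when exported to BigQuery (`protopayload_auditlog.*`); check your own export's schema before reusing it.

```python
# Sketch of a query over exported Cloud Audit Logs: who called which
# methods, newest first, as shown in the demo.
AUDIT_ACTIVITY_QUERY = """
SELECT
  timestamp,
  protopayload_auditlog.authenticationInfo.principalEmail AS caller,
  protopayload_auditlog.methodName AS method
FROM `org-data.audit_logs.cloudaudit_googleapis_com_activity`
ORDER BY timestamp DESC  -- newest first
LIMIT 100
"""
```

Running this in the BigQuery UI (or via the client library) would surface entries like the instance creations and metadata updates from the demo, with Alice's account as the caller.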
Some of these are general for Google Cloud overall, such as syncing your identities. Similarly, here are the best practices that I showed for least privilege. I should have had a picture of [? Dico ?] on here, too, but there's no space. That dog is huge, and there are a lot of best practices. And then, best practices around centralized control, in particular around using the organization node and the resource hierarchy. Here are some of the capabilities that I talked about and links to read more about each of them: things like IAM roles, service accounts, and Google Cloud Directory Sync. I'm just going to wait, because I see there are still some cameras up. And then, additional resources. We have an extensive document on best practices for enterprise organizations, as well as on best practices around image management and securing your images. There were three other sessions at Next this year that are related to this area. All of them have already happened, but they're already on YouTube.
One of those is BP203, Identity as a Service. IO211, Gaining Full Control. And then, Virtual Private Clouds, IO401. And with that, hopefully we can all rest as easily as Milo here, because we know the best practices for securing your workloads on Compute Engine in Google Cloud. [MUSIC PLAYING]
How do you maintain control while taking advantage of the power and scale of Compute Engine? Eric Bahna shares best practices for mapping common organizational structures into Compute Engine using IAM roles, service accounts, and more.