Container management and deployment: from development to production (Google Cloud Next ’17)
(Video Transcript)
[MUSIC PLAYING] KELSEY HIGHTOWER: You guys know who I am, my name is Kelsey Hightower, I work at Google. I specialize in Kubernetes and Google Container Engine. And in this talk, we're going to talk about some of the moving pieces when most people think about the container, or the container development cloud native workflow. So I'm not going to give you a very prescriptive, end-to-end deployment methodology, but I am going to talk about the workflows, especially the ones that I use, and the ones I've seen people use in the wild. I'm also going to incorporate some of the new things you've heard about, like Google Container Builder. How many people have heard of the Google Container Builder? How many people are good at using it? Me neither. So we're in for some excitement. I just finished these demos like seven minutes ago, so I hope they work well. I'm going to need help, so anything that I do, any demo that works, just clap loudly. Are you guys in for that? All right, so the agenda today: we're going to talk about cloud native applications, what they mean and what I think about them.

So I know a lot of people are trying to take their existing applications, some people call them legacy applications, and move them into a cloud. Is that a cloud native application? It's just your app running in the cloud. There's a difference. OK, so we're going to talk about the things and properties that make an app cloud native. And I don't think there's a huge gap, but there's some work you've got to do. And we're going to talk about building container images. Google Container Builder attempts to let you build images without running the builds on your laptop. Now, my workflow typically doesn't involve containers at all. I do all the development on my local laptop. I just use local tools, I run Postgres on my laptop, or MySQL on my laptop, and I only use containers for the last mile: packaging and deployment. So I don't really get into those workflows, but now I actually have a different tool I can use. And then the last thing I hear a lot is, how do I manage multiple environments?

How many of you are running Kubernetes in production? Just raise your hand like you're lying, it looks really cool for the camera. How many people are sharing that cluster for development? Ooh! Man, you guys are going to be on the news. I'm looking for that postmortem. You know it's coming. We're going to talk about some patterns that I've seen for doing Kubernetes responsibly. OK. Now, this isn't the only way of doing things, but it's something for you to think about. So the first thing to talk about is a cloud native application. How many people have built one of these? These are the same people running production and dev in the same cluster. I'm worried about this whole cloud native thing. So here's what I think about cloud native, pretty straightforward to me. It builds on top of the whole 12 factor thing. So if you've heard of the 12 factor manifesto, I boil that down into writing applications that decouple themselves from the environment. OK. So that made it really easy for people to adopt

PaaS platforms, like App Engine, and to move around and adopt new platforms as they come out. So a lot of people that had already bought into the 12 factor pattern found it really easy to adopt something like Docker or Kubernetes, because their application was already self-contained and did a few things that let them adopt a new platform. Now, the other one is scale horizontally. This is one where I think when people enter the container world, especially something like Kubernetes for the first time, they're trying to build their applications the same way they did on the virtual machine, and that's just not how Kubernetes is designed. The idea here is that you specify your workload in units that can scale horizontally. And then honestly, we want the applications to be much smarter. How many people are still writing Nagios scripts? That's my ops people. You hate your developers, don't you? I'm not writing any more scripts; build it into the app. So I think cloud native applications will expose health checks.
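To make that concrete, here's a minimal sketch (my code, not the demo app's) of what exposing a health check from a Go service looks like; the /healthz path is a common convention, not a requirement:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// The platform, not a Nagios script, polls this endpoint to
	// decide whether the app is healthy.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})
	http.ListenAndServe(":8080", nil)
}
```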

They'll do things like application tracing. They'll have all these metrics built in. And then it needs to be operable, and we'll see what that means in a second here. Now, the other caveat is that this isn't just for microservices. A lot of people are like, oh, we're going to adopt microservices and Kubernetes and cloud native at the same time. That just doesn't work out well. If you have a monolithic application, some of these patterns still apply, and we're going to check out one of my example applications in a moment here. So I'm going to do a demo of this application. This is going to be a little GIF builder application, and I'm going to show you a little GIF that we have. So Google has all these little doodles. Here's one of my favorite doodles from Google. So I'm going to open this with my browser. Maybe you've seen this doodle before, maybe not. So this guy is serious business. OK, this is from the Doodle Fruit Games. And there's like 1,000 images here, and we need to remix this a little bit.

So I want to make this a little bit easier. So what I did here was export all these individual images, and I'm going to send them to my own little service here. So the first thing I'm going to do is look at all the images that make up this particular doodle. Here are all the individual images that it takes to make that animation. I'm just going to take a few of them and send them to my own service. Now, when I look at the design of my service, I'm doing a couple of things. One, I want to make sure that my application can run anywhere in the globe. Even startups, or even small companies now, have this expectation from their customers that their app should run anywhere and scale across multiple regions. Now, this is super hard, especially if you're trying to run the data stores yourself. So for this demo, I'm using a cloud native database, we call it Spanner. And Spanner's going to let me just offload all of the thinking about distributed transactions, all the SQL stuff, to Spanner itself.

So I'll be using that in this demo. And I'm also going to avoid POSIX. If you're a system administrator, POSIX file systems and Kubernetes don't go together. I see people try it all the time. They take things like NFS and try to run it throughout a Kubernetes cluster. How many people have dealt with NFS and Kubernetes? Clap if you like it. No claps. It's hard. So in this case, I'm going to use an object store. When I process these images, I want to store them on Google Cloud Storage. So here we're going to start our application. With this little command, I'm going to build my app. I'm going to run this one on port 8080. I'm going to define a dev bucket and a Spanner database. How many people have played with Spanner already? Oh, awesome, three people. We need to have more than that. So here we're going to run our app. The first thing we want to do is build it. So go build. This is all live. I'm hoping it works. Please work. Let's actually install it, so it can be in my path.
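The build-and-run loop he's narrating goes roughly like this (a hedged recreation; the flag names and resource names are my assumptions, not the demo's exact ones):

```sh
# Build the app and put it on the PATH.
go build .
go install .

# Run it against the dev bucket and the dev Spanner database.
gifmaker -bind 127.0.0.1:8080 \
  -bucket gifmaker-dev \
  -database projects/hightower/instances/dev/databases/gifmaker
```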

Remember, if anything works, you guys go crazy. All right, so we're going to run this. All right, so we're listening. So the first thing we're going to do is just use curl to post images to that server running on localhost, and we'll see what the results look like, port 8080. So if this works, from a laptop we'll be writing into Spanner, and then we'll actually be storing images on Cloud Storage. Buddy, I don't know. I don't know if you improved my odds. Oh, wait, wait, wait, wait, don't be clapping until it works. Let's see, did we remix it? [APPLAUSE] OK, so this is our cloud native application now. So this thing needs the cloud, obviously, to work. And as long as we have internet access, we can actually deploy our stuff. So let's look at where all this stuff is stored, OK? Let's poke around here and see what our application is doing, to make sure we can replicate it when we deploy it to Kubernetes. So the first thing we want to look at here is Spanner.

So what I'm doing inside of Spanner: I'm giving it a set of credentials, and all the libraries that access Spanner just assume that you have your service account on the path, and I'll show you how to deploy that in a tool like Kubernetes. So here, we'll click around. And I've been writing events to our dev instance. And we come over here and we look at our events. Somebody's like, man, I just want to talk about Spanner. I know, right? So here we'll make sure that we see our events inside. OK, so we'll run the query. All right, so we see our events are being written in here, and I'm just dropping in the path to the actual image that I was just sharing. And then I'm storing all the images inside of Google Cloud Storage, so now my app is free to move around. So now that we have this cloud native application, we need to move on to talk about packaging it. Now, the other thing is, it took a while to run that, right? And we talked about built-in monitoring for your application.

So one thing I like to do is use tracing regardless, and annotate and trace everything I can to know what's going on in my app. So this is a Stackdriver Trace. I'm just going to click on one of my traces from before. And what you can start to see here is that I import a few lines of code, and now I can see the spans of time taken by my actual request. You can see here about 500-some-odd milliseconds was spent uploading those images to Google Cloud Storage so I can do the processing. And then logging the event to Spanner: all the round trips and formatting the particular thing that needed to go into Spanner. So an application like this makes it really easy for me to figure out what's going on without digging into the source code. This is what I mean by cloud native, and there are other ways you can do this, but I think having these properties out of the box is what takes us a step forward from 12 factor: an app that can run in any environment with this kind of visibility.
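The "few lines of code" he mentions look roughly like this sketch, using the Stackdriver Trace Go client of that era (cloud.google.com/go/trace, since superseded); the span names and project ID are my assumptions:

```go
package main

import (
	"context"
	"log"
	"net/http"

	"cloud.google.com/go/trace"
)

func main() {
	// One trace client per process; "hightower" is a placeholder project ID.
	traceClient, err := trace.NewClient(context.Background(), "hightower")
	if err != nil {
		log.Fatal(err)
	}

	http.HandleFunc("/gif", func(w http.ResponseWriter, r *http.Request) {
		// Root span covering the whole request.
		span := traceClient.SpanFromRequest(r)
		defer span.Finish()

		// Child spans show up as bars in the Trace UI, like the
		// ~500ms Cloud Storage upload mentioned above.
		upload := span.NewChild("upload-to-gcs")
		// ... upload the frames to Cloud Storage ...
		upload.Finish()

		event := span.NewChild("log-event-to-spanner")
		// ... write the event row to Spanner ...
		event.Finish()
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```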

All right, so let's move on to the next step. That's your cloud native thing. So now let's get to this container builder. We have our app, and the price of admission for these platforms is that it needs to be in a container image. And I used to keep Docker for Mac on my laptop just so I could build these images and push them to a registry. But really what I want is some service or something else that can do it. Really, when I check in my code, I want it to happen. How many people are building containers on their laptop? Keep your hand up if you're pushing to production from that same image. Whoo! That person is dangerous. Pushing container images from your laptop works. I don't recommend it. So we have something like Google Cloud Builder. The goal here is that it's a fully managed service with a REST API. The idea is that you can connect to it from other tools, other CI systems. And it also supports things like build triggers, tags, and other custom build steps.

And the idea here is that it has full integration with Google Container Registry. Now, this is the same service that we use for App Engine Flex and Cloud Functions. Those services use this under the covers, and you'll also have access to it. And you can also use your own custom scripts, and we'll walk through what that looks like here. So let's jump over to the container builder. Now that we have this binary up and running, and we've verified that it works, we want to build it. OK, so the first thing you're going to need is one of these cloudbuild.yaml files. What we see here is basically a list of containers. So you have a list of steps, just like any other build system. And the first thing we want to do is specify the container we want to use for this particular step. In this case, I'm just going to use one of the default images from the Cloud Builder team. So this has golang in it, it has the latest version of Go, Go 1.8 is in there. So all I have to do is specify the container I want to use.

That's going to mount my source code into a workspace directory and then run the commands that I specify. So here I'm just going to say go install, which will then produce a binary that will be available for me in the workspace directory for the next step. So now we kind of get this pipeline thing going. And the next step is just going to use a Docker container. What I want to do here is call docker build. Now, this assumes that I have a Dockerfile in my workspace. So if I look at this Dockerfile, you see here that I'm adding the binary that comes out of my Go path. There'll be this gifmaker binary, I'm just copying it, and that's the only thing there. And I'm just using Alpine as a base image. Some people will say, hey, why are you not using the scratch image? Normally when I'm deploying things to production, sometimes I do need to get in there and troubleshoot. And not having some kind of base image there that has some utilities, or the ability to add utilities, makes it a little bit harder to troubleshoot, especially in production.
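Put together, the two files he's describing look roughly like this (a reconstruction under assumptions; the image names and paths are mine, not the exact demo files):

```yaml
# cloudbuild.yaml
steps:
# Step 1: compile inside the Go builder image; the source is mounted
# at /workspace and the resulting binary stays there for step 2.
- name: gcr.io/cloud-builders/go
  args: ['install', '.']
# Step 2: package the binary using the Dockerfile in the workspace.
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/gifmaker:0.0.1', '.']
# Images listed here get pushed to Container Registry automatically.
images:
- 'gcr.io/$PROJECT_ID/gifmaker:0.0.1'
```

```
# Dockerfile
FROM alpine:3.5
ADD gifmaker /gifmaker
ENTRYPOINT ["/gifmaker"]
```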

So I usually use a very small, minimal image like this. So given all that, if we push this, we should create this particular image from our source code. So we'll run that command now. Now, I'm going to cheat and rely on my history to know what command that is. What we want to do here is take that config, take our current context, and push it up to Container Builder. I don't know what that's going to do to your data plan, but it's going to be a little expensive. All right, so what we're doing now is tarring up all the content and pushing it to Container Builder. And this is actually a pretty good idea, because now we're not pulling down all those Docker containers on my laptop (your data plan would be totally screwed if I did that); we're doing all of this on the other side, on the server side. So all the logs here are available in Stackdriver Logging. What you have here now is all the steps coming out. And if everything works, what you'll see is our resulting image pushed to Container Registry automatically.
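The submit command itself (as the gcloud command group was named at the time of the talk) is a one-liner:

```sh
# Tar up the current directory and ship it to Container Builder.
gcloud container builds submit --config cloudbuild.yaml .
```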

Now, the tool is smart enough to use your service credentials, or anything that you have set up in IAM, so it pushes to the right place, and once it's there, it's available for me to use. Now, if I pop over to the Container Builder UI, which is part of the Container Registry UI, we click over here. And we also have all your build history, so here you can drill into all of your logs to see what happened. You can go into each individual step if you want to. And this is where you can look and see that things are working. And now that we have this particular image available, we can look at our Container Registry, and we'll see that the image should be there. So let's find it here. gifmaker. And we'll see that it's tagged there. So at this point, we should be able to push our application to our development environment. But let's talk a little bit about the registry before we do. So the container registry: what's the goal of this, why did we build it, why not just use the Docker Hub?

Well, number one, people want their container images to be accessible and fast. So having Google Container Registry means that your images are stored inside the same cloud infrastructure. It's backed by Google Cloud Storage, so all your images are replicated, you don't have to worry about those things, and it's private by default. It has integration with IAM, so all the things you're using for the rest of your resources, we piggyback on top of those. Some of the other things: the team is really up to date on the image formats we support. So the Docker v2 image format is supported, and as OCI, the Open Container image format, continues to make progress, we'll have support for that too. Vulnerability scanning will be built in. So if you look at some of the command line tooling, you start to see hints of that in our UI. And the goal here is that we'll start by notifying you if there are any vulnerabilities found in your image, maybe inside of your base image.

Tools will get more advanced and start doing static analysis to tell you, hey, you're using a vulnerable library, you might want to update that and do a rebuild. And it's backed by Google Cloud Storage, like we saw before. Now, when you're using this, you can definitely use the command line to do most things. You can list tags for all your images, and here's where you would see any vulnerability reports that show up. So here's our first image that we pushed. You can list all the images you have there. Even though this seems like a simple thing, most container registries do not have a way for you to figure out what the tags are; you have to go to the GUI and drill down. So having this on the command line makes it super easy for me to figure out which image I want to play with and what I want to deploy. All of this is built in, all of this is integrated. So far we've talked about building a cloud native app, and this idea that we want to build it, but we don't want to build things from our laptop; we want something like a build trigger.
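The command line tooling he's referring to looks like this; the project and image names are placeholders:

```sh
# List the images in a registry, then the tags for one image.
gcloud container images list --repository=gcr.io/hightower
gcloud container images list-tags gcr.io/hightower/gifmaker
```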

OK, so now that I have this application, what I really want to do is set up a build trigger that will automatically build this app, but not on every check-in. I actually experimented with having a container built on every check-in. How many people have run out of disk space before? That's what that feels like. OK? So what I found works a little bit better is to build the containers on tags. When I tag something, that's when I want my build to actually happen. So here I'll go to my build triggers, and I can define one. So here's a build trigger, and the UI is pretty nice: you can go through and pick particular hosting repositories, like GitHub. And here, I want to have a tag trigger a build, and when a tag shows up matching my regular expression, I want it to run the commands in the cloudbuild.yaml. So we need to make one modification for this to actually work. Instead of hard coding the tag number there, we want to have it replaced with the actual tag on the repository.
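The modification amounts to swapping the hard-coded tag for Container Builder's built-in $TAG_NAME substitution, along these lines:

```yaml
# The tag that triggered the build becomes the image tag.
- name: gcr.io/cloud-builders/docker
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/gifmaker:$TAG_NAME', '.']
images:
- 'gcr.io/$PROJECT_ID/gifmaker:$TAG_NAME'
```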

So I'll just do that whole cooking show thing, and then let's take a look. So we'll grab the tag name, make that our image name, and have it trigger the build that way. So we'll push this up and then we'll tag it. So here we'll just do git status. Man, look at all that other stuff. I don't know if I should check it in. How many people just do git add and don't even look at it? You're that guy, like, hey man, who checked in their PlayStation games? Like, oh, my bad, dude. I'll fix it. All right, so what we'll do here is, who cares, I think we just add it all. I really want to see what that is. That looks safe, I think. All right, let's just do that. So let's just do this thing here. So what you should do, always, is just blindly do that. And then the worst thing is when people do this: "Updating stuff." It's like, dude, how am I going to code review that? Yeah, you updated stuff, but it's not right. Just take it as is, dude. All right, so now we have this new thing up.

Ideally it should still build. So the next thing we want to do is tag it, so git tag, and we'll call it 0.0.1. And then we'll say git push origin --tags. What? Oh. All right, so we push this there. We should trigger a build. Please, beta, be working. So the first warning is when you log in and it says this at the top. You need to think about that a little bit. So we come over here. Uh-oh. A build is triggering. Do we get some logs? I'm so happy on the inside right now, you guys don't even know. It's fast, too. So now we have the 0.0.1 image based on the build trigger. So now my workflow could be: build my application, test locally (you might want to answer that; ask them if they have any internet access), push it locally, and then once I tag the repository, I have this automatic build trigger that I don't have to think about. So all of this is built in, automatically, in Google Container Registry, and at this point, I'm ready to start consuming that image inside of my container cluster.

All right, so we have our build trigger in place, so let's move on to the next step. This is what we want out of this registry system: we want it to be fully automated. And again, the design here is that you can hook up other tools to do this as well. All right, this guy's still working out, he's probably tired. So I want to talk about Google Container Engine. If you haven't used GKE, the best way to think about it is that we take open-source Kubernetes, we don't modify it in any way, but we do glue in all the Google services: logging from Stackdriver, monitoring from Stackdriver, pulling the metrics from the cluster itself. We take care of all the upgrades and scaling the nodes if you turn on that option. One of the best features, I'd say, is the IAM integration. Anything that you do from kubectl uses your credentials, and once we have fine-grained RBAC, that will be integrated with the custom IAM roles alpha we announced today.

You'll be able to map those to your Kubernetes roles and control access. And it's also easy to manage multiple clusters at a time. The reason for this is that we want you to focus on using Kubernetes, not managing it. Now, having all of this integration leads to the next thing, which is managing development environments. This is where it gets really confusing for most people. The first thing people do when they get a Kubernetes cluster is give everyone in the company kubectl. This guy's laughing his ass off because that leads to a bunch of problems. Number one, by default, most people are giving out admin credentials, so people are adding things and deleting things that they shouldn't delete. So we've got to talk about a better way of sharing a cluster in development. And I do like the idea of self-service. You do not want to operate on a ticketing system to get someone to deploy something in Kubernetes. We made it far too easy to use the cluster for that.

So if you're in ops, the way to think about this is to provide a sandbox for testing and playing with these configs until people get them right for production. So what we want to do is talk about how to share a cluster. Now, I think namespaces are one of the best ways to share a cluster inside of Kubernetes. You can give a namespace to a whole team, or you can give a namespace to individual developers and let them go wild in that little sandbox. But the key, once you issue that namespace, is to use quotas to limit what they can use in that particular namespace. You want to prevent mistakes, right? We're not trying to defend against malicious users, but we want to make sure people don't accidentally consume too much when we give them direct access to this. So if you haven't used namespaces, we're going to flip over and show you how people are managing these things. One thing people are doing is having a git repository that holds a lot of the infrastructure stuff. If you come from an infrastructure-as-code background, you're familiar with this particular model.

This is where we're storing our configs. So what we want to do here is have a dev setup. As a cluster administrator, I want to prep these namespaces and put quotas in place. As a developer, I'm assuming the credentials given to me are scoped to that particular namespace, and I can just push to that namespace and deal with the resources that I have. Now, one important thing you have to remember: a namespace in a single cluster is shared across all the nodes. You're not going to limit what nodes their workloads can land on, so you want to put that quota in place to make sure that you're sharing resources effectively. So the first thing I want to do is make sure that I'm using the right cluster. For those that don't know, kubectl, the Kubernetes command line tool, supports multiple clusters; you can just switch between them. So here we're going to use kubectl config use-context, and then we're going to say dev. So now I'm pointing to my dev cluster.

And here I'm going to create a namespace. There's this really annoying developer, hightower, this guy, and he's just rambunctious, thinks he knows everything. So I'm going to give him his own namespace, and he can go wild in that particular namespace. So as a cluster administrator, I'll say kubectl apply, and I'll create this new namespace. Once this namespace is in place, before I hand out credentials, I'm also going to put a quota in place. All right, so this quota will allow only one service. That's probably one of the most important quota settings: if people start creating a bunch of services, you're going to have a bunch of public IPs, and you're going to be spending a lot of money at Google. That's probably a good idea for us at Google, so up that number if you want to. We're going to limit it to three pods, and then we're going to limit the amount of CPU and other resources that they can request. So here, we're just going to do an apply as well.
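The two objects being applied look roughly like this; the numbers match the limits he mentions, and the rest is my sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: khightower
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: khightower
spec:
  hard:
    services: "1"     # one service means one public IP at most
    pods: "3"
    requests.cpu: "1"
    limits.cpu: "2"
    limits.memory: 2Gi
```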

All right, so now our namespace is set up, and we're able to give our credentials to the developer. What we want to do here is limit what we give them by default. I'm just going to switch contexts, but when you give out your configs, you can actually pin them to a specific context and give them a namespace, so it just defaults to the right thing. And then maybe limit their credentials on what they can switch to. So here we'll say kubectl config use-context, and I'll use the khightower context. All right, so once I'm in the khightower context, I now have my own environment and sandbox to play in. So I say kubectl get pods; hopefully there's nothing running. And then we can also look at our quota, so kubectl describe quota, and we can see the limits that are put in place. So at this point, that's all I have to work with. So I'm going to deploy that application that we had earlier. Here I'm just going to update my deployment. And in my dev setup, I check in my Kubernetes configs next to my application.

If you're running something like Minikube on your laptop, it will give you a one-node Kubernetes cluster. It's a good idea to test out these configs before you push them up to the higher-level environments. So here, what I'm going to do is look at my Kubernetes config for my deployment object. And you'll notice a few things here about configuration. You can see that I'm pointing at the image that we just built from that tag. Down here, I'm keeping my configs separated. A lot of people struggle with this idea of where do I put my configuration files? I recommend you do not put them inside of your image. Do not build a new image per environment with your configs baked in. Kubernetes makes it too easy for you to have your configs parked outside of the image, inside of Kubernetes. In this case, we're using ConfigMaps. Here we're specifying the database we want to connect to, the bucket we want to use, and the project ID that we need to consume.

Now, my application doesn't necessarily take environment variables, but I'm able to reference those ConfigMap values in my flags by using this method, where you refer to environment variables and Kubernetes will substitute them before the container actually starts. Now here's the other bit that you need to think about when you're managing these environments. Even in dev and production, you want to know how your application performs with these kinds of constraints. When I talk to customers, this is the first thing they fall down on. Most people are used to having the whole VM or bare metal machine at their disposal, and they've never been capped before. You take that application and move it into an environment where we start enforcing cgroups, and it starts behaving weird, right? They just say the app is slow now. Of course it is; we're throttling how much CPU you can use. So you need to start measuring these things, I would say even in the lower environments, or even on your laptop.

So make sure you put the limits and requests in place. And here we're mounting in our ConfigMaps, and also the service account that I need to connect to all those resources, just like we saw on the command line. And the other thing that I do in my dev environment is check in my configuration options. So if you look here, I'm going to connect to the dev bucket, because that's the thing that I have access to. I'm going to connect to the dev database inside of Spanner. I'm also going to use my personal project ID. So as a developer, I know I'm pointing at the right namespace. So what we'll do here is say kubectl apply -f, and then I'm going to give it this whole kubernetes folder. Most people don't know you can actually just point to a whole folder, and it'll take all the configs and push them to Kubernetes at the same time, so I don't have to do them one by one. Now, if I push this up, we should have our configuration files and credentials all in place, and our deployment there.
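A hedged reconstruction of the deployment and ConfigMap being applied; the names, keys, and paths are assumptions rather than the demo's exact files:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: gifmaker
data:
  database: projects/hightower/instances/dev/databases/gifmaker
  bucket: gifmaker-dev
---
apiVersion: extensions/v1beta1   # the Deployment API group of that era
kind: Deployment
metadata:
  name: gifmaker
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gifmaker
    spec:
      containers:
      - name: gifmaker
        image: gcr.io/hightower/gifmaker:0.0.1
        # Flags take their values from env vars; Kubernetes expands
        # $(VAR) references before the container starts.
        args:
        - "-database=$(GIFMAKER_DATABASE)"
        - "-bucket=$(GIFMAKER_BUCKET)"
        env:
        - name: GIFMAKER_DATABASE
          valueFrom:
            configMapKeyRef:
              name: gifmaker
              key: database
        - name: GIFMAKER_BUCKET
          valueFrom:
            configMapKeyRef:
              name: gifmaker
              key: bucket
        # Requests and limits so the quota math and cgroup throttling
        # behave the same in dev as they will in production.
        resources:
          requests:
            cpu: 500m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        volumeMounts:
        - name: service-account
          mountPath: /etc/gifmaker
      volumes:
      - name: service-account
        secret:
          secretName: gifmaker-service-account
```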

So let's see if it's actually running. It is running, as I expected with lots of confidence. So with this running, I can now connect to it. Beforehand, I created a service: kubectl get svc. And then we can look at our quota again to see how much we're actually taking up. So that actually cost me about half of my CPU limit, so I'm not going to deploy much more here. So what happens if you do accidentally try to scale this out a little bit? Let's try it really quick: kubectl scale, we're going to scale the deployment object. So I'm going to scale my deployment inside of Kubernetes; this is going to be called gifmaker, replicas equals 3. And then we'll also watch for some events, so we actually see what's going on in Kubernetes: kubectl get events --watch. So here are all the events that are coming out of our cluster. We're about to kick off this command, and we're going to see what happens. You want to observe the behavior.
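In command form, the experiment is:

```sh
# Ask for three replicas, then watch the cluster's reaction.
kubectl scale deployment gifmaker --replicas=3
kubectl get events --watch
kubectl describe quota
```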

So I'm going to ask for three of these. And as I ask for three of these, we get some errors here, right? So we'll cut this. And we'll see that some of them fail because we actually hit our quota. So we're going to end up with only two out of three pods. And the deployment controller behind the scenes will continue to try to create the third pod, hoping that quota will show up to allow it. But in the meanwhile, we're going to continue to hit this error, and we won't be able to use any more inside this particular namespace. If I say kubectl get pods, we'll only have two running. If I look at my quota, we can see that I'm out of quota, and at this point we can have a conversation with the administrator, or you can be monitoring the events that are coming out of Kubernetes to say, hey, we have a namespace that seems to need more resources. And if the cluster isn't fully utilized, you can come back here and add resources to that particular quota, or to that particular cluster.

So we can bump that up a little bit. It takes a moment for this thing to resolve, but we can go ahead and do it now. So we'll edit that compute-resources quota, and we'll bump some of this up; we'll say, hey, let's go ahead and give you four here, we'll give you eight here, and those are fine. And then we'll say kubectl apply. So this is me as a cluster administrator noticing that something needs to be fixed. apply -f compute-resources. So at this point, there's more quota inside of that particular namespace. We see the limits have been raised. At this point, the deployment controller will try and try until it finds room to actually run this workload. So let's see what's happening: kubectl get pods. All right, so this will take a moment, and when it realizes that it will work, it will show up. So we're just going to let that continue in the background as we move on to production. We have one more main use case that I want to show you when dealing with this, and also how Container Builder can help us even for the production situation.

So look at this really quick. This guy's still doing his thing. So for production environments, this is where I think you should just use a dedicated cluster. One, you don't want to upgrade Kubernetes in production just to try out new features. The last thing you want to do is upgrade Kubernetes because you want to use some new feature for dev, experiment, and take down production. All right? You do not want to fill out that postmortem. There's no reason to. The idea here is that the configs you write in one environment can ideally move to the next environment, because all you have to change is the config settings between different clusters or namespaces. And we saw the resource quotas and limits. There's another thing that I want to show you, that I actually don't have on a slide, that I think is important: you want real health checks. So when we look at our production environments, and we'll go over here, we want to make sure that you're actually using things like health checks inside of this.

So in my case, that's going to be readiness probes and liveness probes. If I'm doing rolling updates on my cluster, I'm not going to get to zero downtime if I'm not actually checking for the liveness or readiness of my workloads. So once we have that, the next thing is: I think you need to disable direct access. This is why I think kubectl probably doesn't need to point at production. Ideally you want to give that to admins, and this is where I think you hook in your CI/CD system. Now, those of you that are using Kubernetes in production: do you have it wired up to your CI/CD, are you doing continuous delivery with it? So not the same number of hands, and most people are confused; they're looking for product-to-Kubernetes integrations. And I look at them like, what does that even mean? What would happen there? OK, so what does it take to integrate Kubernetes into your CI/CD pipeline? We'll use Cloud Builder here to try to make this work. So what I want to do is, remember, we're using git for our infrastructure.
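For reference, the probes he's pointing at sit in the container spec and look something like this (the paths and port are my assumptions, matching the /healthz handler sketched earlier):

```yaml
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
readinessProbe:
  httpGet:
    path: /readiness
    port: 8080
  initialDelaySeconds: 5
```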

So ideally, I can have my production workloads live there as well. So if we look at our production configs, and we look at our deployment here, ideally any changes that I make, we're going to make here. Actually, it's at dev right now. Maybe we were about to launch the service for the first time, so we need to move it to production. And since everything is here, ideally I can just set up a build trigger on this repository as well. So I can actually do that. I'm going to go to my build triggers and set up one for infrastructure, but this time I don't necessarily want a tag. Anything that gets pushed here, I want to have automatically roll to production. So: code review, push it to production. And if we look at our Cloud Builder config here, we're going to use a slightly different one. Here I'm actually going to use another container image; I'm just going to bring in the gsutil tool. To make this work, I'm going to grab a kubeconfig from Google Cloud Storage and put it in my workspace so I can access it in my next step.
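Both steps of that pipeline might look like the following sketch; the bucket name and kubeconfig handling are assumptions on my part:

```yaml
steps:
# Step 1: fetch cluster credentials from a private bucket.
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://hightower-infra/kubeconfig', '/workspace/kubeconfig']
# Step 2: apply everything under production/ to that cluster.
- name: gcr.io/cloud-builders/kubectl
  args: ['apply', '-f', 'production/']
  env: ['KUBECONFIG=/workspace/kubeconfig']
```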

Now, my next step is going to use kubectl to do an apply on the production directory. So ideally, for every environment that I want, I can have a separate build trigger just to handle that piece. And if this works, ideally when I push to the master branch, it should automatically update the containers in that environment. So let's switch to production and see what we have. kubectl config use-context production. kubectl get pods. Right, we don't have anything running there. Good, hopefully. So let's just put a watch on this and see what happens. What we need now is to trigger a build. Currently we have dev in our config. And I had a volunteer do a pull request to my infrastructure. So: remove dev image from name. Who gave me this pull request, raise your hand. Do you know what you're doing? AUDIENCE: We'll see. KELSEY HIGHTOWER: OK. So it's code review time. OK. Do not just click Merge. Now, I have no idea if this is actually going to work, OK. We were trying to test this out before the talk, and we ran out of time.

But you got the pull request in. Do you think this is going to work? AUDIENCE: I have faith. KELSEY HIGHTOWER: All right, because they're going to talk about you on Twitter if it doesn't; it's going to be your fault, not mine. All right, so we have this commit, and ideally you just do a code review, right? So let's see what's inside of here. So you're stripping this tag and you want to use this image. Right, so go verify that this image is even real, that it's available. So we go over here and we find, what is this thing, gifmaker 0.0.1. That looks... 0.0.1, looks all right. So this looks good to me. So I'm going to merge this. Now my hope is, if this works, we should trigger a build. I'm hoping this works. All right, so we're going to look at our build history. We're going to see if this triggers a build at all. All right, so let's merge this. All right, so rebase and merge. Man. OK, that did something. And do we have a trigger? Oh, snaps. Oh, snap.

Let's see if it's actually working. So we're running through our custom Cloud Builder file for this. If it completes, we'll go through the steps: we're going to copy in a kubeconfig, we're going to do kubectl apply, and if this completes, what we should see is our pods show up here. And if it's the right version, after our health checks pass, those should be running. And once they start running, we should be able to hit the public service there, so let's try that: kubectl get svc. Oh, we don't have a service. So let's just do... oh, that's going to take too long, but there's a trick we can do to make this a little bit faster. kubectl get pods, we'll grab this name, and then there's a little thing you can do: kubectl port-forward. OK. And then we'll map 8080 on my laptop to 80 inside of that container. And ideally we should be able to push images into this like we did earlier. We'll push those up, and if everything is working, we get this back, and it works.

Awesome. And with that, end of the presentation. Thank you. [APPLAUSE] [MUSIC PLAYING]

 


There are common questions around container management and deployment. What does a development and deployment workflow look like in a containerized world? What are my artifacts? How do I build them? Where do I store them? In this video, Kelsey Hightower walks you through the end-to-end workflow for building cloud applications on Google Cloud Platform (GCP) by leveraging Google Container Engine, Container Builder, and the Google Container Registry. You’ll also have the chance to explore cloud-native applications so you can get the most out of building on Google Cloud Platform.

Missed the conference? Watch all the talks here: https://goo.gl/c1Vs3h
Watch more talks about Application Development here: https://goo.gl/YFgZpl


