Also if you've ever had the extreme misfortune of implementing an OAuth 2.0 server in PHP, you've probably used my library also. So to illustrate an application's lifecycle across Google Cloud Platform, I've built a small app that will take a face image, a base image, and then swap them. So nothing too groundbreaking here, but all this stuff is available on GitHub in case you want to run it. The architecture is a WordPress frontend, just for kicks. And the backend is a Python script that I found on GitHub. And that's been containerized into a worker-type backend. And so this is just supposed to represent a pretty standard, really basic microservices-style setup. The clicker. Keep forgetting. OK, so the first thing that we're going to do with this application is we're going to run it on Compute Engine. And some of the advantages of Compute Engine is we have this awesome Cloud Launcher product that basically has a bunch of stuff that most PHP developers want to run already set up, so that you just click a button.
You can configure a couple of things like HTTP/HTTPS firewalls. You can add phpMyAdmin, that kind of thing. And we have those for WordPress, the LAMP stack. We have it for Zend Server also. So if you're into that kind of thing, you can run Zend Server, and we handle all of the complicated licensing and stuff. And you just pay for it straight through your billing account. This is super low-barrier-of-entry stuff. I was actually going to show you guys my blog. Plug my blog, brentertainment.com, because I spun it up using this. It took 20 minutes. I launched a WordPress instance using Cloud Launcher, and then it was ready to go. It's pretty great. One other thing I wanted to mention is they have autoscaling instance groups. So even though it's super simple stuff, you can actually do pretty complicated scaling rules with it. And then make a pretty resilient application. So this is what you get. This is what my blog is. It's just WordPress running on MySQL in a VM. So does anyone see any problems with this if you wanted to run this at scale or something?
Does this seem like it's a sweet solution? We can all go home once we run this. So obviously running MySQL on the same instance is not very scalable, right? If you spin up a bunch of these instances, they're not all going to be talking to the same database, and it's not great. So what I've done for this example, and this again took very little time, is I deployed my worker container onto this Compute Engine instance. And then set up WordPress to use Cloud SQL instead of the MySQL database that's right there on the server. So has anybody used Cloud SQL? Is anyone familiar with this? We just announced today that we support Postgres in Cloud SQL, which is pretty cool. But it's essentially just managed MySQL. And that way you don't have to worry about setting up master-slave replication, or requisitioning a big beefy server. We take care of that stuff. All right, so I'm going to demo this real quick. And feel free to follow along on your cell phones. The mobile UI is actually really bad.
Actually the UI in general is really bad. I don't think the Next branding team is going to be super stoked if they see this. It's very simple. You give it a very happy picture of me. And then I have a load of images. Does anybody have anyone in particular they would like to see my face swapped with? This one? AUDIENCE: Yeah! BRENT SHAFFER: All right. These are all public domain images, of course, that's why I picked them. So this is now calling the Python backend on localhost. And there you go. Look at that. That looks fantastic. All right, sweet. And go back to the slides. So I'm actually running this architecture, which is just slightly different. What I've done is I've put WordPress into a container that's running on the Compute Engine instance, alongside the worker Docker container. And that way they can still communicate through localhost. And what's really great about this is this is actually really, really scalable. This is a great design. So I can set up autoscaling instance groups through Compute Engine to scale this based on requests per second.
Things like that. And what else? CPU usage. And/or you can actually use custom metrics and stuff. But this will now scale to the high heavens. And if you really wanted to scale you could use Spanner instead of Cloud SQL there, and then you're globally scalable and crazy. But the container part is a significant step, right? You can't just breeze over that. So how many people here are familiar with containers? Or have built them? I actually can't see anything with these glasses on, but I think I saw a lot of hands. That's great news. So containers are one of our favorite things here at Google. We absolutely love containers. If you want to know what to get a Googler for Christmas, get them a gift certificate to The Container Store. That way they can go and pick out their favorite container. We love these things. We run these everywhere. So to talk more about containers, I think a good introduction is App Engine. So once again, App Engine users– has anyone here used App Engine?
OK, slightly less but still a lot of people. So we announced today the GA of App Engine Flex. Which is really great. App Engine Flex is similar to App Engine Standard, but we run a standard container format– Docker containers. So you can take pretty much any container and stick it in App Engine. And you get all the advantages of App Engine, but you're running your own custom container. Which is really sweet. So what we've done– the few PHP-loving Googlers at Google– is we've spent a lot of time building this base image for your PHP applications. And this thing is available on GitHub. And it went into beta this week or last week. So, it's great. And we would love your feedback, obviously, on it. Because PHP– as all of you guys know, there's a broad user base with tons of different types of PHP users. And we're trying to come up with a runtime that's really great for everybody. So how does this work? So when you use App Engine, it just copies over your application code and builds it on top of the base PHP image that we've built.
And then it deploys the resulting container. Right? And this container is built into Google Container Registry, gcr.io. And if you're not familiar with that, it's really basic. It's just like Docker Hub, so it's a repository for images. Every time you deploy an App Engine app, it gets put into this URL that represents your project ID, and the version and service that you just built. So that's really cool too, because you can take that image, and you could deploy it to Compute Engine or to Amazon or run it locally or deploy it to Container Engine. Which is what we're going to talk about later. So some of the advantages of using our base image: we automatically install all of your dependencies, if you're using Composer, or you have a composer.lock. Which you all should be doing. We support all the non-end-of-life versions of PHP and will continue to as new versions become available. We have good logging and error-handling integration with Google Cloud products.
We're using Nginx, which is great. Apache is also great. And we don't dislike Apache and, in fact, if there was a public outcry for Apache support it would be pretty trivial to add it. The problem is it's huge. So it adds a significant amount of size to the resulting image, which slows down uploading and deployment and stuff like that. Nginx is also great. So we've had a lot of luck with that. And that's what we use in the container. So PHP INI configuration– everyone's going to have some customization required. So you can enable a set of shared extensions that we've already compiled into the runtime. Or you can add whatever custom configuration that you need. And you just drop that INI file in the root of your project, and we load it. And the same thing goes for the web server configuration. So with Nginx, if you drop an nginx-app.conf, we load that configuration and run it for the container. So some security changes, since "secure" is in the title of this talk.
PHP and your extensions are kept up to date. So like I said, all the supported versions of those extensions– or sorry, of PHP– when minor releases come out, we keep that regularly up to date. So when you rebuild on our container you get the latest version. We also have security patches. So a little bit on this, because I think it's a really cool thing that we've done. How many of you guys have used parse_str? Or seen that function? Nobody? Wait, this can't be real. This guy over here. OK. We've got one. This is a really common PHP function. It takes a query-string-formatted string and splits it apart into key-value pairs in an array. So if you haven't used it, all the libraries that you're using have used it. So this function is great. But if you don't supply a second argument, instead of loading those variables into an array, it loads them into the current scope that the function is being called in as variables. So you can see why that would be a security issue, right?
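To make the scope-injection concrete, here's a small sketch of the two call styles. (Behavior shown is PHP 5.x/7.x; the one-argument form was deprecated in 7.2 and removed entirely in 8.0, so this snippet is illustrative rather than something to run on a modern PHP.)

```php
<?php
// Safe, two-argument form: key-value pairs land in an array.
parse_str('admin=true&name=brent', $result);
var_dump($result['admin']); // string(4) "true"

// Dangerous one-argument form (pre-PHP 8): the parsed variables are
// extracted into the current scope, so an attacker-controlled query
// string can clobber existing variables.
$admin = false;
parse_str('admin=1');  // no second argument!
var_dump($admin);      // string(1) "1" — no longer false
```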
If you have $admin = false, and then I type admin=true in the query string, that variable is going to be overwritten. So this is not good behavior. This is PHP from over 10 years ago, whenever people were doing terrible things with register_globals and stuff like that. So we don't want that. We don't want that at all. This is what we're trying to fight as PHP developers, all those haters. They keep saying that our language isn't secure because of stuff like this. So what we've done in our runtime is we've patched this. So you can't do this. You'll just get an error that you need two arguments, which is really good. We also have– for PHP 5.6 only. Unfortunately, just for PHP 5.6, because it's not stable for 7.0 and above. We have the Suhosin security extension. And this is a really, really good security extension. Currently what it does for 5.6 is it logs potentially unsafe functions that you might call. Like [INAUDIBLE] pass through and things like that.
And they'll show up in Cloud Logging. So you can audit your own application code to see how safe it is. But it does additional things too. It makes sure that your C extensions– the C calls– aren't doing improper things with memory. Like buffer overflows and stuff like that. It's good to have that in there. We don't use .htaccess, which is generally considered not good practice. And we have appropriate file permissions and user permissions for the code that you deploy. So it's not executing as root, and the files aren't writable. Stuff like that. OK. Talked a lot. So the two files that you need– and this still isn't App Engine. This is just a container for your application, which is really nice. The first file that you need is php.ini. This one. And this is for whatever PHP configuration you have. For this app in particular, I'm using the GD extension to do some image manipulation. So I've added that to my INI file. I've increased the maximum number of uploads, so that you can upload 100 faces if you want.
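A sketch of what that php.ini might look like — the directive values here are illustrative assumptions, not the exact numbers from the slide:

```ini
; php.ini dropped in the project root — a minimal sketch.
; Enable a shared extension precompiled into the runtime.
extension=gd.so

; Loosen the upload limits so lots of faces fit in one request.
; (Values below are placeholders.)
upload_max_filesize=10M
post_max_size=32M
max_file_uploads=100
```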
And it might explode, I don't know. But you can do it. We have a few other extensions that are installed for WordPress: BC Math and OPcache. So we also have the Nginx configuration file. And this is all that's required for WordPress. And you actually don't need this if you're using our default configuration. But I assume that most people want to add this anyway. But this basically just points to the front controller. All modern frameworks these days use front controllers. So that's essentially all this is used for. And then for WordPress you have a special rewrite for wp-admin. So if you're deploying to App Engine Flex now, you need one additional file, which is app.yaml. And if you've used App Engine, you've used app.yaml before. This is basically just a configuration file that tells App Engine how to deploy your application. So in this case, we have runtime: php to let it know it needs to build from our PHP base image. And we have env: flex to deploy to the App Engine flexible environment.
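A sketch of such an app.yaml — the database values are placeholders, and the Cloud SQL connection string follows the project:region:instance convention:

```yaml
# app.yaml — minimal sketch for App Engine Flex; values illustrative.
runtime: php
env: flex

runtime_config:
  document_root: .        # front controller lives in the project root

env_variables:
  DB_USER: wp_user        # placeholder credentials
  DB_PASSWORD: change-me

beta_settings:
  # project:region:instance of a hypothetical Cloud SQL v2 instance
  cloud_sql_instances: my-project:us-central1:wp-db
```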
The runtime configuration thing just says our front controller is in the root of our project. So if you had Symfony, it would be document root web. If you had Laravel, it would be document root public. And then the rest of this isn't super important, but you can set environment variables in app.yaml. And if you're using Cloud SQL version two, you declare your connection there. And it looks like this. This is basically the same as what we had, but now we have two separate services, instead of two services contained in one instance. So some advantages of App Engine. We have versioning with containers and URLs, which is really nice. So every time you deploy, you get a time-stamped version for the image and a URL as well, representing that version you deployed. We have autoscaling, of course. And you have default SSL at appspot.com, which is nice. So let's go back to the demo. Now here's Faceswap on App Engine. Is there anyone else you guys want to see? If not I'm going to choose.
I might go with Ronald Reagan. This one's really good, because I apparently look exactly like Ronald Reagan. [LAUGHTER] Faceswapping has no effect. So if we look at App Engine in Pantheon here. Sorry. You see we have a default service, which is our WordPress container. And then a worker, which is the Python service. So I don't know how interactive this crowd is. But if you guys all want to access this on your phones and start actually face-swapping with images from your phones, we can actually see these scale up. And that's pretty fun. So I would love it if you guys did that. I actually haven't done this before. So I think it would be fun. The URL is gae.faceswapp.com– like two P's, F-A-C-E-S-W-A-P-P dot com. That will load this up. You can also go to cloud-next-php.appspot.com. It just redirects. And in the meantime, I'm going to upload a couple more. Let's see if this works. That's a terrible one. I know I should delete it.
Depending on how many people are doing this, this might not work. Is anybody actually face-swapping out there? I'm just curious. Raise your hand if you are. OK. We got one. We got a handful of people. OK. So this should definitely work. Should definitely work. When I tested this, it only took two different attempts at overloading the worker. I sent in 20, and within seconds it spun up five more. So I'm hoping the same thing happens now. Come on. Yeah! OK. We got another one, sweet. All right. That's App Engine. That's the basics. All right, let's go back to the slides. OK, so we're going to talk about Container Engine now. So moving an application to Container Engine, now that you've done all these other things, is hilariously easy. There's not even really a step. You could literally take the image that App Engine uses and deploy it to Container Engine. And you're ready to go. So it's really simple. One of the cool features that we have is– let's say you do have a complicated app.yaml or something.
You can generate a Dockerfile that represents the container that App Engine builds with a single gcloud command: gcloud app gen-config --custom. And so that gets you a Dockerfile. I haven't really explained what a Dockerfile is yet– that is essentially the standard Docker container format for building your container. And I'll show you an example of that in the next slide. So after you've done that, you run a Docker command to build the Docker image. And then a gcloud command to upload that image to Google Container Registry. Which again, is like Docker Hub. It's just a repository for all of your container images. So this is what you need for this app in particular. You need a configuration file for the two different services– our WordPress app and our worker app. And you need a configuration file that just establishes a load balancer, so you get an IP for your service. This is what a Dockerfile looks like. This is the one I use in this app.
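Reconstructing that slide, the generated Dockerfile is roughly this — treat the exact base-image path and copy step as assumptions, since the generator's output has varied:

```dockerfile
# Dockerfile produced by `gcloud app gen-config --custom` — a sketch.
# Build on Google's PHP base image for App Engine.
FROM gcr.io/google-appengine/php:latest
# Copy the application code into the directory the base image serves.
COPY . /app
```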
It's incredibly basic. All it does is say we're using the PHP base image– the Google App Engine PHP base image, so there it is. And then your application is going to be installed in the app directory. That's what got generated. So these are taken by Kubernetes, and Kubernetes is container orchestration. So in comparison with what we did in Compute Engine, what we had was just two containers running on the VM. With Kubernetes, we're now going to have a fleet of these containers that we can scale up and scale down individually. And we have a lot more control and management with them. And I'll demonstrate that later. But this is the big value add of using Container Engine. So this configuration file is actually really simple. It may look daunting, but really all we're doing is adding some arbitrary labels here, so we can have fine-grained control over these containers. We are just letting it know that we need a load balancer for our WordPress service. So there's the WordPress configuration.
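That configuration might look roughly like the following — the names, labels, and image paths are illustrative, and it assumes the WordPress service is expressed as a Kubernetes Deployment plus a LoadBalancer Service:

```yaml
# wordpress.yaml — minimal sketch; names and labels are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: faceswap-wordpress
spec:
  replicas: 3                     # three WordPress pods
  selector:
    matchLabels:
      app: faceswap
      tier: frontend
  template:
    metadata:
      labels:                     # arbitrary labels for fine-grained control
        app: faceswap
        tier: frontend
    spec:
      containers:
      - name: wordpress
        # Pulled from Container Registry, so it gets the latest image.
        image: gcr.io/my-project/faceswap-wordpress
        ports:
        - containerPort: 8080     # the container listens on 8080
---
apiVersion: v1
kind: Service
metadata:
  name: faceswap-wordpress
spec:
  type: LoadBalancer              # provisions an external IP
  selector:
    app: faceswap
    tier: frontend
  ports:
  - port: 80
    targetPort: 8080
```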
You'll see we have an image path using gcr.io, so it always pulls the latest image. And we let it know the container is running on port 8080. And we again give it some labels so it knows how to reference this container later. And we also have three of them going on. This is our worker. It looks pretty much identical. We have a different image path, different labels for it. And we're going to use two of those. So I already showed you guys how to build. This is the same thing whenever you deploy these things. You run kubectl apply. And kubectl is actually already on your machine if you use gcloud. And I assume everyone here uses gcloud. So it's already there. You can just run this command, and you're going to deploy these pods to Container Engine. The architecture of this looks pretty much identical to what we had before. Now we just have Container Engine pods, instead of App Engine instances. And the difference here is a pod is basically just this conceptual– it's one level of abstraction above the Compute Engine instances that you're running on.
And this just allows you to have multiple containers running in pods, and then you can take multiple pods and run them across your instances– your Compute Engine instances. So demo time. Once again. All right. This time I'm going to do my other favorite, which is Jack Nicholson from "The Shining." So he looks really angry, but now he's happy. Actually I think that one is probably scarier than his other face. So to highlight how powerful Kubernetes is, I'm just going to switch to the terminal real quickly. And edit the one file that I showed you up there for the worker. We can see we have two replicas right down here. Everybody see that? So I'm going to add a zero, and then just run that same command that I did before: kubectl apply. Now if I do get pods– oops, I've got to do it quick. They're already up. That wasn't good, because you didn't actually see that they weren't up before. Yeah? AUDIENCE: What does [INAUDIBLE] ready, two over two mean?
BRENT SHAFFER: OK, good question. So this is a list of the pods. And inside "ready" is the containers. So two of two containers are ready. So for our Faceswap worker pod, we have just one container running. And that's the face swap script. For our WordPress worker– or our WordPress pods– we have two containers inside. One is for the actual face swap WordPress container that we built and deployed, that we talked about. And one is something that I didn't talk about, because I didn't want to confuse anybody. But it's not that confusing. So there is a Cloud SQL proxy running in another container– this Cloud SQL proxy. So if we look at this original file that I had, after this container configuration, I have another container configuration that basically spins up a proxy server from another container image. So the WordPress instance is communicating with a proxy server, which communicates with Cloud SQL. And that's how Cloud SQL version two works. I don't know if anybody here that works at Google has a good explanation for why that is.
I don't actually know. But if you use something like Spanner, which is something we announced recently– and it's an awesome database– you won't need to do that. You'll just connect directly to it. But that's good, because it illustrates some of the nuances of Kubernetes that you don't have to deal with if you use App Engine. If you use App Engine, you don't need to spin up a Cloud SQL proxy. The App Engine environment handles that for you. You just connect to the database directly. And it also illustrates how pods contain multiple containers. And then the containers can communicate on localhost within the pod. Which is really nice. OK, so what I was going to do– I just want to do this real quickly, because I just love how seamless this is. So I just spun it back down to two. It happens so fast. Whoops. Not that. Let's do 30, what the heck. I haven't done that yet. I don't know if that'll work. OK, there we go. We already have most of them running. Sweet.
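The replica dance in that demo boils down to a couple of kubectl commands — the file and deployment names here are placeholders, assuming the worker is a Kubernetes Deployment:

```shell
# Edit the replica count in the worker's config, then re-apply it.
kubectl apply -f worker.yaml

# Or scale directly, without editing the file at all:
kubectl scale deployment faceswap-worker --replicas=30

# Watch the new pods come up (READY shows containers ready per pod).
kubectl get pods
```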
So for my final trick, let's just blast this thing with all of these images and see what happens. Come on, come on. OK, they're coming in. That's good. It's actually a little slower than I expected. So I really like the currency guy up here. I think they all actually look better. I really like the Statue of David. That one is nice. I like his build. It's a little better than my own. OK, so let's go back to the slides. So we talked about Google Compute Engine, which is less managed, but it's an incredibly low barrier of entry. And it actually has the ability to be very scalable. In just one second, Marek is going to come up and talk about how they've built a global and very scalable application using that paradigm that I showed you, with containers running inside a Compute Engine instance. We've talked about Google App Engine, which is a lot more managed, and it's highly scalable. And we talked about Google Container Engine, which is maybe a higher barrier of entry, but once you get used to it you have an insane amount of options– you can deploy tons of services.
And manage them very easily. So are there any questions? Sorry. We have Q&A at the end, and we're going to do all the questions then. But is anybody confused, before I hand it over to Marek? Or does this all make sense? OK. Sweet. We'll be back for questions in just a minute. So without further ado, come on up. Please applaud Marek. He's great. [APPLAUSE] MAREK DAJNOWSKI: Hello everyone. I'm quite confused why I was invited, because we use Amazon AWS. [LAUGHTER] No, just kidding. Sure, we have plenty of nice experience with Google Cloud Platform. We moved to Google Cloud Platform in January 2016, and it took over three months to learn and evaluate various Google Cloud products against our application. What did I do? OK, and before I start, I'll give you a quick background about Instapage and me. I am chief architect at Instapage. And I joined Instapage six years ago. And since its very beginning, I've seen its growth from the very start. And Instapage is the most powerful landing page marketing solution.
And it requires solid infrastructure. Our application seemed to be scalable, but not in the right way. Speaking to my colleagues, I had the metaphor that we have a fleet of buses– 40 seats and one wheelchair space each. And we thought we had the capacity to scale. But when we have a spike in need of wheelchair spaces, we are not efficient. Because, for example, we have to send 10 buses for 10 people who require wheelchair spaces. And referring to our application, the situation was exactly that. We needed something more flexible. Because all traffic came to our monolithic app, and there were some bottlenecks. And at times the application crashed, causing downtime. And after experiments with Kubernetes and Docker, it was clear that orchestration was the right direction. So for our system to scale, we decided to chop it up and split it into small independent microservices. But gradually. First we took out an [INAUDIBLE] storage system, which you can see on the right.
And then we made a landing page server system for our clients' landing pages. And as you see on this diagram, after the transition, our [INAUDIBLE] was smaller. The database was smaller too. And it had to handle less traffic. And most of our traffic was directed to lightweight, dedicated, and autoscaled microservices. And serving landing pages, we observed that changes in load were very variable and unpredictable. And we even had a lot of spike loads. So we needed some kind of sophisticated automated system that could scale up and scale down. So what we designed was something like a reverse proxy. And we wrapped it in Docker to make it easy to start them. Because start time was very crucial, we even created an instance template which preloaded a Docker image, to spawn them as fast as possible. And then we combined everything in backend services with autoscaling turned on. And all traffic is going through the HTTP load balancer, which allows us to direct traffic– direct it to the zone where the visitor is from.
So visitors from the US will see the content served by instances in the US. From Europe, in Europe. And from Asia, in Asia. And that was a perfect solution for our needs. Each instance has been tested and proved that it can handle 200 requests per second. And with a minimum of five– because we have five zones: three in the US, one in Europe, and one in Asia– we could handle 1,000 requests per second. And if traffic exceeds 150 requests per second for an instance, the system creates new ones to handle the traffic. And if traffic goes down, the system takes down unneeded instances. And real-life traffic sometimes looks like that. Some customers inform us that they expect high load, but some do not. And even with traffic like that, the autoscaling system, using autoscaled backends and Docker, is able to handle this kind of traffic without human input. The scale– it could be a day. Normally it looks like an [INAUDIBLE] sometimes.
BRENT SHAFFER: Quick question for you, Marek. MAREK DAJNOWSKI: Sure. BRENT SHAFFER: What's the upper bound on that graph there? MAREK DAJNOWSKI: Which one? BRENT SHAFFER: The upper bound. You have 1,000 requests per second, and then it goes up to– MAREK DAJNOWSKI: –Yes. I do predict that some people create a campaign using AdWords. And they tend to stop it for some reason. Or they are also running out. For some reason, the traffic drops, and then it comes up, if their campaign is resumed or something like that. And our system has to handle 500,000 registered users, who have made 1.5 million pages. And traffic to these pages generates 20 to 50 requests per second normally, when you see the smallest usage. But the highest we recorded was 12,000 requests per second, and even over that time the system was fully functional, because it created 60 instances. That was obviously a DDoS combined with [INAUDIBLE] traffic. But we had to handle it, at least at the beginning. And we have done great work, and we have more efficiency and scalability.
And a much more stable platform. And we are ready to grow even more. Thank you. BRENT SHAFFER: So while you're up here, I have a quick question. MAREK DAJNOWSKI: Sure. BRENT SHAFFER: You said you have six microservices, plus a WordPress front end, plus a PHP back end that is gradually being split. MAREK DAJNOWSKI: Exactly. The diagram I showed you in the beginning is nothing like what we have in reality. Yes, we have what's called the PHP application, WordPress, and four [INAUDIBLE] services, because I count– [INTERPOSING VOICES] BRENT SHAFFER: And those are all containerized, and they're running on a single Compute Engine instance. MAREK DAJNOWSKI: Exactly, yes. And thank you for mentioning that, because we do containerize everything. Because we want to use Kubernetes to manage our deployments. And just to manage it. BRENT SHAFFER: A follow-up question then would be, of course– it seems like it's a good fit for a Kubernetes deployment. So I'm curious why you guys made the decision to use Docker and Compute Engine, as opposed to going with Container Engine?
MAREK DAJNOWSKI: Yes, that is why we are containerizing everything. BRENT SHAFFER: Got you. I really like your guys' story. I think it's really cool. Because it shows where a lot of PHP applications are, right? I have personally been involved with many PHP applications that follow this trend of starting with a massive monolithic application that was built a while ago, and it's being split up because it no longer scales well. Or just because it's very hard to manage. So you guys are in between, where you have moved away from the monolithic PHP app stage. But you're still transitioning to the fully orchestrated, via Kubernetes slash Container Engine, fleet of containers. So it's obviously a long process to move an application to that. You mentioned that you guys have four or five other microservices that you're– MAREK DAJNOWSKI: In development, yes. BRENT SHAFFER: That are in development, and that you guys are going to deploy pretty soon. So at that point, you're looking at a very strong use case for something like Kubernetes.
Because those services are not going to scale equally, right? You're going to want to scale some more than others and everything else. One of the advantages of Kubernetes I also wanted to highlight– using container orchestration in general– is whenever you're pushing updates and rolling updates for this stuff, like I did. Which was at a very, very small scale. But when you're talking about a very big scale, like Marek is doing, you can use the tags and the labels– you can do rolling updates. Kubernetes actually has support for just doing rolling updates. So it'll take down half your pods and deploy the new pods pretty much transparently. But another advantage is if you have coupling between these microservices. Where let's say you have an interface that's going to break compatibility. And so the way that these two microservices talk to each other has to be updated at the same time. Kubernetes makes it very easy to do that, as well, using those labels. So as long as you have the proper labels, you can roll over those services together.
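A rolling update like the one described is close to a one-liner — this sketch assumes the service is a Kubernetes Deployment, and the image and container names are placeholders:

```shell
# Roll the WordPress deployment to a new image, a few pods at a time.
kubectl set image deployment/faceswap-wordpress \
    wordpress=gcr.io/my-project/faceswap-wordpress:v2

# Watch old pods drain and new ones come up, mostly transparently.
kubectl rollout status deployment/faceswap-wordpress
```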
So I'm personally not a DevOps guy, which is probably why I absolutely love containerization in general. And then also, even more so, Kubernetes and Container Engine. Because it makes me the DevOps guy. I don't do the bash scripts. At every other job I've worked at, I turn over the application code to these guys that are building RPMs, and they live inside their terminal. And they're super fast typists and have all the hotkeys. And that's not me. But this kind of stuff makes that very accessible. And it makes you feel like you do have good oversight of your DevOps. So actually, because we have extra time, I'd like to turn it back over to my demo real quick. And show one other thing. So you guys already saw the fleet of containers that we deployed on the command line. So if you're using Container Engine– we have this great little dashboard that you can run. Oh, I messed up. And this helps you visualize your cluster. Oh man. AUDIENCE: [INAUDIBLE] BRENT SHAFFER: Oh, thanks, guys.
There she is. So the reason why this is indicating a warning is because I spun up too many. So I spun up 30, but I only have four Compute Engine instances that are available. So there's a handful of these pods that couldn't be deployed, because they have a minimum amount of requirements that can't be met. Whether it's CPU requirements or something else. Yes– this is CPU requirements. So one of the primary tenets of Kubernetes is that it abstracts your container orchestration from the hardware that it runs on, or the VMs that it runs on. So the scaling of the pods happens automatically. And it happens separately from the VMs that you give Kubernetes. Because Kubernetes is going to take care of all that. It's going to make sure that it schedules the pods in the right places and the best places. So in this case, if I gave it a super beefy VM, and then I gave it a one-CPU, horrible old VM, it's going to use each for the resources it can, to make the best of it.
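Those "minimum requirements" are resource requests in the pod spec — the scheduler only places a pod on a node with that much spare capacity, which is why 30 replicas couldn't all fit on four small VMs. A sketch, with illustrative numbers:

```yaml
# Per-container resource requests — values are placeholders.
# A pod stays Pending if no node has this much unallocated CPU/memory.
resources:
  requests:
    cpu: "500m"        # half a CPU core
    memory: "256Mi"
```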
So that's pretty much it. Kubernetes– it does get very complicated. I think one of the things I wanted to show in this presentation is that it doesn't have to be. There are a lot of really, really good resources out there on Kubernetes. And so I don't need to be another one. There's a Udacity course online on Kubernetes. That's great. There are a lot of YouTube videos and presentations that have happened at other Google conferences that are great. But, hopefully, you guys get the feeling that you can take an existing PHP application, or whichever one that you're planning to build next, and give this a try. Give containers a chance, because they're great. And Kubernetes itself can be a little bit intimidating at first, because you do have these configuration files and stuff. Which is why App Engine is great, right? You just gcloud app deploy, and your containers are running in the cloud. It's a good way to dip your toe in. And then when you're ready for the next step, you can go ahead and pull those containers down and start running a whole fleet of these things.
And that's pretty fun. [MUSIC PLAYING]
Does setting up a scalable and secure PHP application sound insurmountable? Are you building your apps with the LAMP stack? This video will teach you how to use Google Cloud Platform’s (GCP) security and scalability in the PHP ecosystem. Learn how to deploy LAMP to Google Container Engine and separate MySQL and application code into distinct containers to handle increasing traffic. By the end of the video, you’ll know how to deploy an autoscaling WordPress cluster using Kubernetes.
Missed the conference? Watch all the talks here: https://goo.gl/c1Vs3h
Watch more talks about Application Development here: https://goo.gl/YFgZpl