Google Cloud NEXT '17 - News and Updates

Startups share insights on building enterprise solutions (Google Cloud Next ’17)

(Video Transcript)
[MUSIC PLAYING] SAM O'KEEFE: Welcome. Thank you for joining us in the Make Google Cloud Work for You session. In this session, we're going to hear from four very distinct, very unique, and interesting startups, all leveraging Google Cloud Platform to build better products and services for their companies. My name is Sam O'Keefe. I work with the startup team here at Google Cloud. And if you are part of that team and in the audience, if you can just raise your hand for me for a second. So we've got a bunch of people up at the front. Take a look at these people, because if any of what you hear today or have heard all week resonates, and you want to get your startup on Google Cloud, or you want to talk to someone about whether it's the right fit and how to do that, these are the people that can help you out. So we're going to take some questions from the startups following all the presentations, so keep any question about what you hear locked away. We will have time for questions at the end.

And without further ado, I'd like to bring up our first company. So we have Indraneel from LiftIgniter. This is a bit of a homecoming for Indraneel. He was a research scientist here at Google on the Sibyl team and is now taking everything he learned here and creating an awesome new company. So please join me in welcoming Indraneel. [APPLAUSE] INDRANEEL MUKHERJEE: Thanks a lot for the introduction. Thanks everyone for coming here. So we are LiftIgniter, and we use machine learning to personalize all your digital experiences. So our core thesis is everyone is unique at every moment in time. And the ideal digital experience should flow like a fluid conversation. The website should constantly update its contents based on every action you take. And here is an example of a site that does just that. Every time you watch a YouTube video, a very sophisticated AI picks 20 videos for you to watch next from a pool of three billion plus videos. And it tracks everything you do, every click, every search.

And it's constantly updating its contents with one sole goal: make you spend more minutes on YouTube. Now, I know all this because I was part of the five-member team at that time, called Sibyl, which built out the YouTube recommendations. And when we launched five years ago, we added a huge improvement on the core watch time metrics, on top of what existed at that time, which was already a very sophisticated system. And the reason we could do this is because we took some very cutting-edge machine learning techniques and implemented them at Google scale. And that drove all those improvements and a huge chunk of incremental revenue. And it was powerful enough that we could roll it out onto all other major Google properties like AdMob, Gmail, Google+, Play Store, Product Listing Ads, you name it. So what we're doing at LiftIgniter is taking that extremely flexible and powerful technology and making it available to the rest of the world. And we are doing it on Google Cloud Platform. Thank you, Google, for everything.

So we are super excited because the opportunity is incredibly large. Almost everything you do online, whether you're browsing content, buying something for your friend, looking for the optimal cab ride, everything can be optimized using machine learning personalization. And we hope LiftIgniter is a service that does that. But to make this a reality, you need some serious computing power, which is why Google was a pioneer, right? It has the best infrastructure and data centers. So when I left Google, I needed a platform like that. And unfortunately, about three, four years ago, Google Cloud was not mature enough to support all our needs. I had to go to some other cloud providers. But I'm very happy to say that GCP is at a point where a small team like us was compelled to make the switch over. And there are a variety of reasons. One of them, for instance, is that there are some cool new services, like containerization as a service, which is very critical for managing our microservices.

Pricing is kind of cool because we have a lot of heavy workloads, and we don't have to make upfront commitments to get competitive rates. But closest to my heart is the opportunity for a small or medium-sized business to build a brand. It's a better platform for exposure to consumers, for instance, speaking at this event or other joint marketing activities, which the other cloud providers don't really help you with that much. So if you're a small or medium business, or even a big business trying to build a particular brand, I strongly urge you to seriously consider switching to GCP. It's going to be worth it. So here's an example of how we are using it. We're just migrating in, so we're not fully using every service possible, but we are already touching a bunch. So Google Pub/Sub for consuming this fire hose of events. We are actually consuming four billion pages per month already. It's a five-member engineering team. And we've built this amazing scalable technology, because it's possible now with Google Cloud and other providers.

So Google Pub/Sub for consuming the data fire hose, Bigtable for storing a lot of our inventory, BigQuery for doing a lot of the analytics, and a lot of compute clusters for doing some very serious computation. So 4 billion pages per month. 300 million items scanned daily. And we can easily scale this 10x to 100x with very little effort because of all the features available. So who are some of the customers who are creating all this data for us? You're going to see a bunch of logos. The main takeaway is these are some of the biggest brands across a variety of verticals, like video, music, content, e-commerce, B2B, you name it. We average an 80% improvement using our personalization across all these verticals. It just gives you a sense of what the power and flexibility is. So let's dive into a couple of use cases. Everyone's heard of Vevo. It's the largest music video company. And they use us to power music recommendations across their website, as well as all their other apps, so all kinds of devices: Android, iPhone, television, et cetera.

And they chose us after running a five-day head-to-head A/B test against four other personalization providers. We're happy to say we not only won, we beat the second spot by 40%. And the main metric of interest was a certain kind of engagement that they were interested in. And you can see it was a sustained lift every single day over a long period of time. And while we crush the performance numbers, we are not just a black-box recommendation service. We offer a lot of control to fine-tune our models. As an example, the Vevo team wanted a special experience where similar-sounding songs would be recommended. Their data science team had come up with these cool spectral features extracted from the songs themselves. And all they had to do was upload them to our servers. They automatically got incorporated into our modeling. They could manually tune the weights, so that more emphasis would be placed on these acoustic similarity metrics. And voila, you have personalized recommendations with a heavy emphasis on similar-sounding songs.
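The idea of weighting custom features on top of a model's score can be sketched roughly like this. All names here (blendedScore, acousticWeight, the feature vectors) are illustrative assumptions for the sketch, not LiftIgniter's actual API or model:

```javascript
// Cosine similarity between two feature vectors, e.g. spectral
// features extracted from songs.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy blended score: the personalization model's score plus a
// manually weighted acoustic-similarity term, so raising
// acousticWeight pushes similar-sounding songs up the ranking.
function blendedScore(modelScore, seedFeatures, candidateFeatures, acousticWeight) {
  return modelScore + acousticWeight * cosineSimilarity(seedFeatures, candidateFeatures);
}
```

With a high acousticWeight, a candidate whose spectral features closely match the seed song outranks one with a slightly better model score, which is the kind of manual emphasis described above.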

Another use case: e-commerce, product recommendations. You've seen this one. It's a very common application. What's not very common is the kind of improvements we drove. We added a 105% improvement in conversions, which is plain, cold sales dollars. Right? This is dollar value being added. And interestingly, we also had the side effect of increasing the average cart size by 35%, even though that was not a metric we directly targeted. And this is a thing that happens a lot, which is, if you have a good personalization system, all your metrics are going to go up globally, even when you're focusing on just one particular metric. So how much work was all this? Literally like five minutes. You drop a beacon, just like Google Analytics, onto your sites, and we collect all your data. No need to integrate us with the CMS. This JavaScript automatically scrapes all the content and creates a real-time snapshot of your entire content metadata. No need to dig into your databases to pull out historical data about your users.

We collect it all in real time from the client devices. And within seven days, our models are ready. And if you want to start showing recommendations, you have to do the hard work of writing one more line of JavaScript, the one below, which makes an API call and starts pulling in recommendations. Behind all of this is a lot of complex machine learning: very high-dimensional feature regression. We use some of the most cutting-edge machine learning techniques, and we hope to use more as Google Cloud makes more of these services available. That will both simplify our code base and make the whole system much better. In the interest of time, I can take more questions later. The main takeaway is all this complex machine learning and all this infrastructure has one goal: create immediate and massive business value for our customers. We are the team who made YouTube addictive. And if you want to see an 80% improvement in conversions in 30 days with five minutes of integration work, please check out LiftIgniter.
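The two integration steps described here, a page beacon plus one API call for recommendations, can be sketched as follows. The endpoint URLs, field names, and payload shape are placeholders invented for the sketch, not LiftIgniter's real API:

```javascript
// Step 1: a beacon that reports page activity, analytics-style.
// navigator.sendBeacon is a standard browser API; the payload is
// returned here as well so the sketch is easy to inspect outside
// a browser.
function sendBeacon(eventType, payload) {
  const body = JSON.stringify({ event: eventType, ...payload, ts: Date.now() });
  if (typeof navigator !== 'undefined' && navigator.sendBeacon) {
    navigator.sendBeacon('https://collector.example.com/events', body);
  }
  return body;
}

// Step 2: the "one more line" that makes an API call and pulls in
// recommendations for the current user.
async function fetchRecommendations(userId) {
  const res = await fetch(`https://api.example.com/recs?user=${encodeURIComponent(userId)}`);
  const { items } = await res.json();
  return items;
}
```

In practice the vendor's snippet would also crawl the page for content metadata; the point of the sketch is just how little the integrating site has to write.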

Come find us in the Startup Village. Thanks. [APPLAUSE] SAM O'KEEFE: Thank you, Indraneel. So if anyone else has been looking for someone to blame for their 4:00 AM cat video obsession, you can talk to him after. Next up, I'd love to welcome Josh from Twistlock. Josh started his career in developer operations before those two words became one. Here to talk about container security, please welcome Josh. [APPLAUSE] JOSHUA THORNGREN: Hi. Thanks, everyone, for coming out today and taking some time to hear about how Google Cloud helps startups like ourselves transform the way we deliver software to our customers. So I'm going to talk a little bit about why Twistlock, why I'm standing in front of you today, why we're a company. With the transformation of traditional virtualized infrastructure into containerized infrastructure, into microservices, what used to be monolithic applications, what used to be single VMs, is now split apart across multiple artifacts. At the same time, the trend of DevOps has accelerated the way companies deliver software and push it to production environments.

Whereas there used to be one deployment a week, one deployment a month sometimes, now you have dozens a day. That creates a very different picture for security companies than what the world was like even as recently as five years ago. What Twistlock exists to do is deliver enterprise-grade security with DevOps speed, with DevOps agility. So, talking a little bit about our company. We were founded in early 2015 by a team of ex-Microsoft folks: half from a machine learning team, the team that built Cortana; the other half from the team that delivered enterprise security to Microsoft's customers. What they saw was that with these shifting trends in the market, old-world security, the notion of let's just ring-fence the VM, let's throw a firewall up around everything, no longer delivered the right protection for broken-apart microservices. So they had the idea to combine their talents in machine learning and security and deliver a product that transformed the way security was performed but still provided the strength and protection that large enterprises required.

We shipped our first release in early 2016. And a year later, we have over 40 enterprise customers worldwide using us to protect their production infrastructure. I'm going to talk a little bit about how we do that, as well as how our team uses Google Cloud to deliver those services. And then I'll follow up with some of the synergies that we see when we work with customers who use Google Cloud themselves. First off, speaking about Twistlock, we provide what we call end-to-end container security. Because containers are portable across any environment, it's important to protect not only in production but in staging, in the registry, even on developer workstations, depending on your environment. So we provide a number of services. I'm going to highlight three of them here: vulnerability management, runtime defense, and compliance. For vulnerability management, we scan containers at any stage of the life cycle for malware, CVEs, misconfigurations, any gaps that could cause holes to open up in your environment.

The way we do this is we source a number of feeds directly from vendors and from threat intelligence providers themselves. We don't leverage NVD or any aggregate clearing houses of data. We go directly to the source. That's packages, the operating system, all the layers of the container. We scan for vulnerabilities in real time. We integrate with tools like Jenkins or TeamCity to alert or even block builds based on the number and severity of vulnerabilities found. Our runtime defense feature is really what starts to set Twistlock apart, and what I really consider transforms the way we do security from the old world. At runtime, Twistlock leverages the static analysis that we've done in the vulnerability piece. And it combines that with machine learning that analyzes the contents of a container and uses the contents and the behavior of the container to create and automatically enforce a security model: no oversight required, no manual model creation. That little bit of machine learning allows organizations to deploy quickly and have security for every container in their environment without having to write a policy for every container in their environment.
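The CI gating mentioned above, blocking a build on the number and severity of findings, boils down to a simple threshold check. The report shape and default thresholds below are invented for this sketch; Twistlock's actual scan output and Jenkins/TeamCity plugins are not shown here:

```javascript
// Decide whether a CI build should be blocked, given a vulnerability
// scan report and configurable severity thresholds. A build fails if
// it has more critical or more high-severity findings than allowed.
function shouldBlockBuild(report, { maxCritical = 0, maxHigh = 5 } = {}) {
  const count = sev =>
    report.vulnerabilities.filter(v => v.severity === sev).length;
  return count('critical') > maxCritical || count('high') > maxHigh;
}
```

A CI step would run the scanner, feed its JSON report into a check like this, and exit nonzero to fail the build when it returns true.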

When a new container comes up and it's the same container image, that policy is automatically transferred over, automatically applied. This allows teams to rapidly scale what they do without having to worry about how their security organization keeps up with that. So that's the network, that's syscalls, that's the file system, and that's the processes running in the container itself. It covers all four layers of that. That's not where it ends, though. We also offer compliance to ensure that your environment meets either company or industry standards from, again, build to runtime. We work closely with NIST to do things like translate the HIPAA security rule into a containerized environment and provide HIPAA compliance to our customers. We provide PCI compliance to our customers. And we're fully extensible with OpenSCAP standards. That means any company that's looking to provide security compliance across their environment can leverage Twistlock to do so. That's a little bit about what we provide as far as container security.

And we provide all of this, and our development team leverages Google Cloud to do so. So I'm going to talk about that a bit. When we first started, it was important for us, as a cloud-native company, as a company looking to transform the way security was done for the new world, to find a provider that shared that same DNA as us, that really understood that the way software is delivered is transforming at a rapid pace. And it was so evident, from the start even, that Google Cloud was the provider in the space that had that shared understanding, that shared DNA, with Twistlock. I've got speed. I've got flexibility. I've got reliability up here. And I'll talk about those. But the thing that always struck me coming in, and I'm not a member of our R&D team who made this decision, but when we looked at it, the thing that always struck me was the shared terminology. When you leverage Google Cloud, when you leverage Google Container Engine, when you leverage Google's cloud services, the vocabulary in the platform, in the console, in the management tools, the way it's written and structured, it's the same thing you see in other cloud-native services.

Google talks about containers the way Docker talks about containers, the way Kubernetes talks about containers. You don't see that with other providers. You see them trying to take the language of old-world virtualization and translate that over. And I know that seems like such a small thing, but when you're talking about shared DNA, that was an immediate first signal to us that, hey, there's some alignment here, let's keep going. When our R&D team evaluated, what we found was Google Cloud Platform, Google Container Engine, really offered more speed than the other options out there. The Kubernetes-native integration, as it were, allows our team to rapidly deliver software and build software on a platform that's shared by the majority of our customers today. The learnings that we gain from that process, and our ability to understand real-world use cases during the development cycle, are extremely beneficial. Things like one-click upgrades allow us to instantly patch or bring up our entire environment without worrying about, did we get this part?

Did we get this part? You click. It works. Done. It's not just speed, though; it's the flexibility. The granular IAM controls, the ease of user management, really allow us to build different environments with different access for different teams working on different facets of our product. That type of granularity isn't something you see in AWS. That's not something you see in Azure. That flexibility, coupled with a wide range of OS availability for virtual machines, really was a compelling factor for us. The third piece: reliability. We're a security company. Our team needs to develop as fast as threats come in. If there is a zero-day that we need to update something in our product against, we need to have uptime. We can't have our VMs down, our environment down. Compared to Azure, compared to AWS, Google Cloud offered significantly higher VM availability and uptime for our teams. The other piece: we're a security company. We need to make sure that the environment we develop in is secure.

So you look at Google Cloud, you look at the way it notifies. When there's an unpatched server, when there's an incident, when there's an open port on the firewall, you get a notification. That comes to you. There's no logging into a centralized console to get that information. There's no digging through a series of menus. That information is surfaced, and it's surfaced in real time. For a company that prides itself on how we deliver security, seeing that in our cloud provider was invaluable. Those are some of the reasons that our team chose Google Cloud for our deployments. It's helped us scale from releasing a product a year ago to serving over 40 customers today. I'd like to talk briefly about one of those customers and the scale and the efficiencies they're able to see by using both Twistlock and Google Cloud. Sadly, this is one of those things where, I'm a security company, everyone seems a little jittery about being up on a slide as, oh, Twistlock provides our security.

Understandably so. So we'll just say a leading US media company, a joint customer of both Google Cloud and Twistlock. They have over 400 Docker hosts today and plan to scale that number to 1,000 by the end of this year. They have over 1,000 images in their registry. This spans 12 business units and hundreds of developers, all of which make multiple deployments a day. In the old world, before Google Cloud, before Twistlock, securing this type of environment would require teams, teams for each business unit here. This is all done now with a single security architect, a single person responsible for monitoring and enforcing policy across this environment and ensuring security and compliance as this scale continues. This is the cloud-native dream. This is the way the world is moving, where security is no longer a stop sign. Security is no longer a stoplight. It's a traffic camera. That's the promise of enterprise security with DevOps agility. That's what we do here at Twistlock.

Feel free to come visit us in the Startup Village later today. Also, feel free to get in touch at Twistlock.com, Twistlock.com/demo. Thank you all so much. [APPLAUSE] SAM O'KEEFE: Thank you so much, Josh. It's really amazing when you hear those numbers, taking an entire team and making just one person able to do everything they needed to do before. Very impressive. Our third company is Greta.io. Presenting is CEO and co-founder Anna Ottosson. If you can't tell, she's from Sweden, joining us all the way from Sweden today. And a fun fact about Greta: it may be the only technical company I've ever met, feel free to correct me if someone can contest this, that was named after a founder's grandmother. Great-grandmother, actually, so this has a lot of history. Please welcome Anna to the stage. [APPLAUSE] ANNA OTTOSSON: Hi, everyone. I am Anna. And I'm the CEO and one of the founders of Greta.io. We're extremely happy to be partners of Google. And I'm very, very glad to be here today.

So before I started Greta, I worked at a large European media company, where we, amongst other things, streamed the Champions League, which, at least in Europe, is a very big thing. But what we learned the hard way was that during the games, and especially during the most important ones, our viewers would way too often experience problems. On a bad day, it could mean a crashed service. But even on a good day, they would often be met by bad image quality or buffering, such as in this image. In this room, I think we as consumers can all relate to how extremely frustrating that is. It is exactly situations like that that made us start Greta, a completely new way of delivering content over the internet. We do this by creating a new type of infrastructure layer on top of traditional CDNs. The reason is that we found that despite using CDNs, a lot of companies really struggle to deliver the end-user experience that we think their users deserve, especially during the most business-critical times.

Another example of that is e-commerce. E-commerce companies often spend months preparing for new campaign launches or, for example, Black Friday, but when their consumers stand there with basically money in their hands that they want to spend, they're often met by a crashed site or an extremely slow one. And the reason for that is basically that the internet as it's built today isn't very scalable. There are obviously a million positive things about the internet, but it has two major limitations in the infrastructure today. One of those limitations is that it is built on physical hardware networks, such as CDNs, for example. That capacity really is limited, which means that obviously it can't handle infinite load. Another big limitation is that it depends on the geographical spread of the network. And as you can see, that isn't very evenly distributed throughout the world. And we're currently at the stage where we're seeing consumers in multimillion-person cities such as Nairobi and Mumbai, for example, demanding the same type of internet experiences as we're used to here in the US and Europe, but without having the infrastructure to support it.

So we're simply in a situation where we have a supply shortage of capacity. And that will only get worse. There are more and more connected people, more connected devices, but we're also consuming heavier media types than we've ever done before. We see that today in the form of HD content, for example. But we're also seeing growing trends such as VR and 4K. Consumer services such as Twitch TV and Netflix, for example, have also shifted the way we consume content. And it makes it more likely, now than before, that we're consuming content at roughly the same time of day as, for example, our neighbors, which further increases the load on the already strained infrastructure. At Greta, we think it's especially those times when there's extremely high load, and it's also often very business critical, such as the Champions League final or Black Friday, that are the situations we find most interesting. And the reason for that is that by using our technology, you can actually provide the best user experience when it matters the most.

By adding more concurrent users to your site or service, you can actually deliver higher quality faster whilst offloading the servers. The reason we're able to achieve this is that we utilize our own decentralized data distribution network as one of the ways in which we deliver content. So basically, this means that the more users you add to your service or site, the higher the density in our network becomes, meaning that we can find more optimal ways of delivering content to your users. This is one of our users. It's a global audio streaming company using our technology today. And in the background, you can basically see a screenshot of how their concurrent users at one point in time are forming a Greta network. By using our technology, they've increased their throughput by an average of three times, whilst offloading more than 91% to Greta's network. One of the things that we find challenging, but very interesting, about building relatively complex infrastructure technology is packaging that technology in a way that makes it truly accessible to developers and companies.

That's why we are very proud that most of the sites using our technology deploy our script without any hands-on help from us. And often, they have it live in production in less than 10 minutes, which we find pretty amazing. By adding our script, you get access to our analytics platform, where we basically look at all your content delivery solutions today and how they affect your performance and user experience. We then take all that data and feed it into our intelligent routing algorithms, and can thereby determine what route will actually give the end user the best possible experience. Our product today consists, amongst other things, of CDN evaluation, peer-to-peer delivery, Smart Cache solutions, as well as performance analytics and reports. And it feels like I've said the word performance a minimum of 100 times in the last minutes. And I'm sure you're fed up with it on day three of GCP Next, but it truly is our obsession at Greta. And it's also that obsession that's made it very natural for us to work with GCP from day one.
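The routing idea described here, measure each delivery option and pick the best one per user, can be sketched as below. The route names and the scoring heuristic are illustrative assumptions, not Greta's actual algorithm:

```javascript
// Given measured performance for each candidate delivery route
// (e.g. a CDN edge, a peer-to-peer path, a local cache), pick the
// route with the best expected experience. The toy score rewards
// higher throughput and penalizes latency.
function pickRoute(measurements) {
  let best = null;
  for (const m of measurements) {
    const score = m.throughputMbps / (1 + m.latencyMs / 100);
    if (!best || score > best.score) best = { route: m.route, score };
  }
  return best && best.route;
}
```

A real system would also factor in peer density, failure rates, and content availability per route, and would re-evaluate continuously as conditions change.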

And I am actually not saying this because they've paid for my ticket and everything; I truly mean it. And I thought some of you might find it interesting how we utilize GCP. So another big reason why we worked closely with Google already from the beginning is that we're utilizing the technology WebRTC to do a lot of what we do. And for those of you not familiar with it, it's a web standard where Google has been one of the main initiators. Furthermore, at GCP, it's obviously important for us to get access to the network. And that is why we use their load balancing. We use Google Container Engine to host our services, and we use BigQuery and Cloud SQL to store data. We're obviously continuously looking at how we can improve what we do and how we can do so with the help of GCP. Some of the things that we're really excited about adding are the things to the right here. So for our persistent data, we're currently adding Cloud Spanner, which there has been a lot of talk about here at GCP Next.

So basically, the reason we use Spanner is to be able to do data replication with low latency. But the thing that we're truly excited about and waiting for is the global version of Spanner, which will be released later this year. In order to handle our network data, we are adding Cloud Pub/Sub together with Dataflow. And this allows us to enrich and analyze our network data. Furthermore, we are experimenting with Google's Machine Learning Platform as part of our offering for prediction and anomaly detection. But what I've told you here today is really only the beginning. We are extremely excited to keep working with Google, and we're very, very excited about continuing our journey towards changing the way the internet works. Thank you so much. [APPLAUSE] SAM O'KEEFE: Thank you, Anna. Before I bring up our last presenter, I want to remind everyone that following this presentation, we are going to be taking questions. So there's a microphone set up in the middle here.

Don't be shy. And they can be directed at any of our four presenters, so start thinking of those questions. But now I'd love to welcome to the stage Matthew, from Incorta. Matthew describes himself as equal parts engineer and enterprise sales, having spent 15 years at Oracle. Today he's here to talk about how to make analytics truly real time. Please welcome Matthew. [APPLAUSE] MATTHEW HALLIDAY: So we used to think that the earth was flat. We used to think that flight was impossible. We used to think that 640K was more memory than any computer system would probably ever need. We used to think that really there was only one enterprise cloud. Until last year, we used to think that the Cubs would never win the World Series. But it takes people like the Wright brothers and Pythagoras to come and challenge these core beliefs, these perceived truths that everyone operates under the pretense are true. And it's into that world that Incorta steps.

And for the last 25 years, we've seen a lot of complexity in trying to get to performance. So Incorta has a fundamental belief that we can change that. I'm going to talk about that same mentality, that same approach, that the Wright brothers had. And you've got to remember that they were at a time when everyone was saying that it's impossible to fly unless you had a gas that was lighter than air. That was the commonly held belief. And so there is a commonly held belief that's been around for 25, 30 years that I want to share with you today, and then talk a little bit about how Incorta shatters that belief. So at the beating heart of every enterprise, you will find an ERP system. You'll find something that's registering every business activity: every sale, every order, every supply chain event, any collections, revenue. They all flow through a data model that looks something like this. This data model is what we refer to as a normalized model. It's lots of tables with lots of relationships.

So you have an online transaction processing system. This is actually a real-world schema that's used significantly through most of the Fortune 1000 companies, and this is actually just one query. This is the content; these are the relationships and the joins that are needed to give you something. Now, this model works pretty well when you're looking at something like one transaction. This model falls apart the moment you say, I want to look at my data across everything, and I want to slice and dice it any way I want based upon the information that I have. And so for the last 25 years, we've seen a lot of innovation. But it's all been down the wrong path, because they believed the joins and relationships between data could never be fast. Joins and sorts are what kill database performance. So we started with extract, transform, and load: ETL processes. People started to automate them, have automated testing for them. All of this innovation was poured into ETL.

People looked at star schemas, or cube structures, or summarized data, to be able to put the data into a shape where maybe the relational database can minimize the joins and give you that data in a timely fashion. And then there were other companies that jumped up and said, you know what, we're going to make really hefty, big pieces of hardware. We're going to charge a million dollars plus for this piece of hardware, an appliance that will sit in your data center and promises to give you unparalleled performance. But what happened? We had frustrated business users, and we had a super busy IT department. And so people would spend months, if not years, iterating on star schemas, trying to find out, what are my business requirements? What do my business users want to see? And then I'll spend the next year and a half building it. They'll take a look at it, and then they would say, oh, that's interesting. What about this? Well, you didn't ask me that. Let me go back, and I'll come back.

And so they'll be looking at data that is static. Very, very much set in stone in terms of how they want to look at it, but also it'll pretty much always be yesterday's data. These ETL processes can take 12 to 13 hours, nightly refreshes, and people just say, that's what we do. That's what we've been doing. That's what we're expecting. And you would think after doing all this complex work that it would be fast, right? Well, it was certainly faster than a relational database with joins, but it was still slow, some jobs taking five minutes, even up to 90 minutes. Run your job, go for lunch, come back hoping that it's done. And so Incorta steps into that and says, if we can do something very special, we can change the fundamental approach and reverse that decision from 25 years ago, and say, there is a better way. Incorta's Direct Data Mapping is exactly that. You can think of it as a technology that enables joins to perform. It knows exactly how every piece of data within a relational model relates to everything else.

No need for star schemas. You can set this up in hours and days. You can refresh in minutes. We have customers here in the Bay Area that are updating every five minutes. They're getting the latest and greatest data flowing into their system. And with all of that you say, OK, great. It's faster. Looks easy. It's quicker time to value, but surely the queries are slower. Well, it's not even the same speed, for sure. We're talking orders of magnitude: 50, 100 times faster than some of these other approaches. So going back to this schema again, I want to show you something. This is the schema we showed at the front. I want to show you a little bit about how this works. That schema has over a billion records in it. I'm going to show you now an 11-table join. And we're going to see the response in 0.8 seconds. So what does that look like? Does anyone want to see it? AUDIENCE: Yeah! MATTHEW HALLIDAY: Sorry? AUDIENCE: Yeah! MATTHEW HALLIDAY: OK, cool. So let's see this in action.

So here you can see a Tableau dashboard running on top of Incorta's Data Platform. And every click is resulting in all of these things being redone. If you've used Tableau in the past, you probably go, this doesn't look like the Tableau I'm familiar with. I normally expect the spinning wheel. Did you edit it out? What happened? No, this is real. If you don't believe me, stop by the booth in the Startup Village and we'll show it to you in action. But this kind of response, with this complexity, without the need to transform your data, has never been done before. And so the next question is, why Google Cloud? Well, there's a few things here. One of the things that we notice when we're speaking with our customers is elastic scaling. We definitely wanted to have a system and a platform that enabled us to grow with the demands of our customers. We have customers here in San Francisco, like [? hypergrowth ?] startups that are kind of like Spotify for clothes, that are providing analytics to their users using this system.

Of course, at different parts of the year, they are going to have different demands. And to be able to spin up and grow your system in less than 10 minutes is something that was really, really unique. Also, price performance. There's a lot of talk about performance, but really it's going to come down to a predictable cost. Can I know exactly what this is going to cost me next month, or am I going to be guessing at my budget? How do I budget for this? How do I plan? And that's where Incorta felt that Google Cloud was very, very straightforward, but also a very good price point for performance. And the third one, no shock here at Google, right, would be developer friendly. Probably everyone here might well be from a developer background, and it doesn't stop there. As a startup, it's very unlikely that you would get invited to even speak into the direction of a cloud company. And we have found that to be very true with our relationship with Google Cloud. And so I want to talk about one of the things that has come out of this relationship with Google Cloud that we think is super exciting.

So here, I'm glad to announce that we have a special way for enterprises to get to the cloud. Now, we have a lot of enterprises that historically have data centers. If you've been around for probably more than 10 years, you probably have a data center. And you probably think, I want to get to the cloud, but how do I get there? Am I going to take everything down and move, or do I want to do it in pieces? Or how can I augment what I have with some additional resources from the cloud? And so Incorta provides the ability to have a single dashboard with different components that look seamless to the user. They're just getting served content. Where that content is coming from could be Incorta running on Google Cloud, or it could be Incorta running in your own data center. So content that maybe you're not prepared to put in the cloud yet, you can keep on premises, locally in your data center. But for data that you want other teams or other people to be able to leverage the power of Incorta on, you can do that with Google Cloud.

And that gives you the extensibility to grow and to shrink as needed as demand changes. So with that, I'd like to invite you to come speak to us. We will be at the Startup Village for a few more hours. And if you're unable to make that, definitely stop by Incorta.com. We'd love to show you a demo and to talk more about that. [MUSIC PLAYING]


In this video, four different startups share their experiences building on Google Cloud. LiftIgniter is a machine learning personalization layer powering user interactions on every digital touchpoint. Incorta aggregates large, complex business data in real-time. Twistlock is a leading provider of enterprise container security solutions. Greta is a peer-to-peer distribution script, turning visitors on your website & app into distributing points of presence (POPs).

Missed the conference? Watch all the talks here: https://goo.gl/c1Vs3h
Watch more talks about Big Data & Machine Learning here: https://goo.gl/OcqI9k

