All of these things you've got to plug into your own system and figure out what you're going to do with it. And you spend a lot more effort figuring out how to unpack that than actually analyzing the data itself. So ultimately, which are you? Are you an investor or are you a data plumber? So my personal philosophy is it shouldn't be this hard. How can Cloud help? Just to summarize, Google Cloud Platform is building a data marketplace. We are hosting commercial data on Google Cloud Platform, simplifying distribution. So if you have data in any of these formats, we are targeting specific Google Cloud products to distribute them through. In terms of access, they are all authenticated through one single mechanism. So if you work with five different data providers, you're not authenticating five different times using OAuth2 or passwords or special HTTP headers. If you're on Cloud Platform, you use Cloud IAM. It's the same authentication mechanism you're already using as a Cloud user.
The idea here is you can log in once and access all of it in the same place. And once it's accessible in the same place, it's ready for use by your analysts. You can use SQL directly against data in BigQuery. You can use BigQuery itself. You can export to Microsoft Excel. You can pull data from Google Sheets. For the technical members in the audience, you can also use JDBC/ODBC connectors. If you want to throw a BI layer on top, BigQuery works out of the box with Data Studio, Tableau, Qlik, all those partners up there. Let's say you have data scientists. Now you can work with data from BigQuery and Google Cloud Storage. We have a product called Cloud Datalab, which is a Jupyter notebook. If you're familiar with scientific or numerical computing with the Python library pandas, we have a product for that. If you want to start exploring machine learning, TensorFlow reads from BigQuery and also Google Cloud Storage– ready to go. And if you have a data engineering team, great.
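To give a feel for the analyst path just described, here is a minimal sketch of the query-over-SQL workflow. It uses Python's built-in sqlite3 as a stand-in engine so the example is self-contained; against the real marketplace you would point a BigQuery client or a JDBC/ODBC connector at the provider's shared dataset. The table name, columns, and prices below are invented for illustration.

```python
import sqlite3

# Stand-in for a provider's shared table: in a real project you'd point the
# google-cloud-bigquery client (or a JDBC/ODBC connector) at the shared
# dataset instead of an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gold_prices (trade_date TEXT, close REAL)")
conn.executemany(
    "INSERT INTO gold_prices VALUES (?, ?)",
    [("2017-03-01", 1249.7), ("2017-03-02", 1233.1), ("2017-03-03", 1226.5)],
)

# The analyst-facing workflow is just SQL, whatever the engine behind it.
rows = conn.execute(
    "SELECT trade_date, close FROM gold_prices ORDER BY trade_date"
).fetchall()
for trade_date, close in rows:
    print(trade_date, close)
```

The point of the single-sign-on story above is that this same query shape works across every provider's dataset once your one Cloud IAM identity has access.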
You can work with BigQuery, Google Cloud Storage, and also Cloud Pub/Sub. Cloud Pub/Sub is our streaming messaging product. From there, you can process it using Dataproc if you're familiar with Hadoop and Spark, or you can use Cloud Dataflow, which unifies both batch and stream processing. Why are we doing all of this? We want to accelerate your time to insight. So today, we have a few launch partners who will be coming up, talking about how they are delivering data specifically to address problems in the financial services space. And I'd like to invite up Al Chang, CTO from Xignite. AL CHANG: Thank you, Matt. So I'm Al Chang, CTO for Xignite. And I want to thank Matt for inviting us here to speak at the session. But more importantly, I wanted to thank Matt for reaching out to us late last year and really being instrumental in getting us up to speed on BigQuery in such a short amount of time. So in the next few minutes, what I'd like to do is give you a little background on who we are at Xignite, what we do, how BigQuery plays into our offerings, and end with a short demo.
Xignite delivers financial market data over the cloud via REST APIs. We've been at this for close to 10 years now and we have about 1,000 customers. We're serving about 1 trillion requests per year. It's not Google scale, but for us, it's pretty big for a 45-person company. So where we play is basically– we're very strong in the fintech space. These are the guys who are trying to disrupt the traditional financial services industry. Our customers include the largest robo advisors. You've probably heard of names like Wealthfront, Betterment, Personal Capital. These are our customers. These are the guys who are basically looking at financial advisors who are hand-picking stocks and managing your money, and they're automating it. And by the way, if you've seen the news these days, ETFs are outperforming most active funds. And these guys, the robo advisors, can charge a fraction of the cost because they have a much lower cost basis. We have some of the most popular mobile brokers like Robinhood, who's making a name for themselves by offering $0 commission trades.
We have some of the fastest growing online brokers like Motif– Motif Investing. These guys define themes like sustainability or fair trade, and they will help you invest your money in the stocks and securities that align with those motifs. They also let the community define their own motifs so they can share them. So it's a very interesting social-meets-finance kind of a thing. And StockTwits– this is an example of one of the biggest social finance platforms. These are the guys that invented the cashtag. So the point I'm trying to make here is that these are examples of startups that had a great idea for a financial app, and now all they really need is the market data to power it. And so it should be pretty simple, and it is. So if you go to the traditional providers– the Thomson Reuters, the Bloombergs, the IDCs– you first pay a hefty licensing fee. Then you start to build your data feed handlers against legacy proprietary protocols built in the '70s. You put your servers out in colocations in New York and Chicago to be closer to the exchanges and [INAUDIBLE] plants.
You've got to build out your own production database infrastructure to store the data that you actually have to pull down. And then you build the serving infrastructure that actually serves the data out to your applications, just so your iPhone app can show the current price of Google. We think that's kind of crazy, especially because all of this is not a differentiator for these companies. If you're a robo advisor, if you're doing a social media platform, you just need the data. You don't want to spend that much time and money doing something that's really a commodity. So this is where Xignite comes in. We offer what we refer to as a zero-footprint market data API. So basically what that means is zero production infrastructure to get financial market data into your app. We have REST APIs, which is basically the same thing as typing a URL into a browser. That's how easy it is. We return data in XML or JSON, which any application developer can easily bring into a mobile app or their website.
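To make the zero-footprint idea concrete, here is a tiny sketch of consuming a JSON quote response in Python. The payload and field names are invented for illustration and are not Xignite's actual schema; in a real integration the string below would come from a single HTTPS GET to one of their documented endpoints.

```python
import json

# Hypothetical JSON body, modeled loosely on what a market-data REST API
# might return for a single quote request. Field names here are made up.
payload = '{"Symbol": "GOOG", "Last": 847.80, "Currency": "USD"}'

# One parse call and the data is ready for a mobile app or website.
quote = json.loads(payload)
print(f"{quote['Symbol']}: {quote['Last']} {quote['Currency']}")
```

The whole point of the REST/JSON approach is that this is the entire client: no feed handlers, no colocated servers, no database.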
We offer data across all asset classes: currencies, metals, commodities, equities, options, indices. We have real-time data, historical data, company fundamentals– basically everything you need for a financial application. In addition to these request-response APIs, we also have other ways to get the data. We can stream the data directly to your device. We have the ability to let you define market conditions that notify you when they occur. Maybe your favorite stock hits a certain threshold and you want an email. We have HTML5 widgets that are predefined, prepackaged, and customizable, which you can directly put into your app. Once again, zero footprint– you don't have to host the widgets. We host them. We manage the checking of all the conditions on our side. Cloud streaming– same thing. All the infrastructure is on our side so you don't have to put anything on your side. Which brings us to why I am here. Well, we do have one area where we have, quite honestly, a very suboptimal solution, which is basically our customers who want to do analytics.
They have certain algorithms to try to predict the way the market's going to go with different data. They want to do backtesting based on the historicals to get a feel for how their strategies would have performed in the past. So today, we offer bulk CSV downloads, which people still use. We have historical options data, index data, equities data. We even have all the trades and quotes for US exchanges for the past 10 years. That's a ton of data, which is just terabytes and terabytes of data– well, I forget I'm at Google– it's only terabytes of data. It's not petabytes of data. But still, that's a ton of data, and quite honestly, the easy way to get it is sort of what Matt referred to– it's actually putting the data on a hard disk and shipping it to them. It's much faster than trying to bring it over the internet. So what I'd like to do now is show you what we've been working on. It's basically taking our historical data, bringing it into BigQuery, and making it available directly from the Google Cloud Platform.
Can we switch the screens? OK. So this first screen here is basically Google BigQuery. We have tables of historical data for currencies, equities, indexes, and metals. And the quick, very basic demo I'd like to show you– hopefully this will get the point across– is, let's say that you've heard that gold is a great hedge against the stock market. If the market's going down, gold goes up and vice versa. So let's test that out. What I have here on my laptop is basically RStudio– a very common tool. You can use any tool you want– I'm just giving you an example– where I'm essentially just making a script that makes a direct SQL call to BigQuery to pull the historical time series data for gold. It's kind of funny– originally my demo used the S&P 500, but since Thani's here, I changed it five minutes ago to pull the Dow Jones Industrial Average. [LAUGHING] So: for gold, for the Dow Jones Industrial Average, and for a gold mining stock– Newmont Mining.
So let's really do a query, check the correlations, and plot the data. So you run this thing. And up pops the quick graph. It shows you– the red line, I believe, is gold, the green line is the Dow Jones, and the blue line is Newmont Mining. So even visually, you can see there's a very interesting correlation, but you can actually get the actual values as well– a correlation of 0.65 with Newmont Mining and a negative 0.45 with the Dow Jones Industrial Average. So you get a feel for not just whether it's negative or positive, but even how much. Now, it's a really simple example, right? But the point here is that I never had to download a CSV. I never had to stand up my own database. I never had to maintain that solution. If I wanted to, I could even go back here and say, well, let me just check against the S&P 500. Take a different one, run it. Where's my run? There it is. Here it is. And you can do this over and over again. So now let's pull some more data. I want to try it out. It's all out there in the cloud.
So can we switch back to the presentation? So just in closing, I want to share with you that today, we do have sample data up there for you to try and get a feel for how easy it is. We obviously have a lot more data than what's in the sample, but this is at least an example to give you a flavor for how easy it is. Thank you. [APPLAUSE] MATT TAI: All right. Thank you, Al. So I don't know if this was clear earlier– every provider we have on stage today we've asked to provide a sample. If you're interested in licensing their commercial data, you work with them directly. So next up, I'd like to introduce Jeremy Sicklick, CEO of HouseCanary. JEREMY SICKLICK: Great. Thank you, Matt. Thank you, everyone. So today we're going to talk a little bit about residential real estate. Residential real estate in the United States is a $30 trillion market. It's the largest asset class in the United States, or the world for that matter. And whether you're a major hedge fund, a big investor, a big lender, or even just a homeowner, we all struggle with two fundamental questions.
Number one, what is the value of this property? And number two, is the price going to go up or is it going to go down? And at HouseCanary, that's what we've been focused on solving and providing to a number of sophisticated investors and lenders over the last couple of years, using massive data sets and a lot of machine learning, which I know is being talked about a lot. And we're excited to put a couple of parts of our platform on the Google Cloud Platform today. So let's talk about the state of affairs in real estate today. It's massively fragmented. In essence, you have 120 million properties in the United States, and every property is its own unique snowflake. They're all different. And underlying that, the data is super fragmented. You've got 3,000 to 3,200 different county assessors and public recorders, 750 different multiple listing services, and a number of other sources. The problem is every one of those data sources is different. There are no common data standards.
So the information that you get from one never aligns with the next. The definitions of what it means are all different. And in many cases, it's all still paper-based. That's what's slowing down the process of buying homes and lending on them. So what we've been really focused on is bringing this information about this massive asset class together, bringing it into the 21st century, and largely indexing all the details about every home, every mortgage, and every neighborhood. And when you do that, there are three things that this really starts to enable. Number one, we can get deeply detailed. Today, if you read the newspaper, it'll come out that Case-Shiller says that Chicago is up by 4%. But what does that mean about my local area? For us, we're now providing that level of detail for every one of 381 US metropolitan areas, 18,000 zip codes– every zip code with a population of 1,000 or more– and now blocks, three million blocks. How are prices changing– getting down to the details.
The second area is really being predictive. Past is not prologue for the future. I think we've all realized that through what's happened in the markets over time. And the idea is to bring the history but then also use machine learning and predictive analytics to really forecast what's going to happen in the markets, using data that no single human could ever figure out. It's too complicated at this point. And finally, it's bringing all the data you can to bear and building highly accurate values for 90 million homes that are updated monthly, and rental values for 60 million homes, given how large single family rental now is as an asset class. So to make this real, what we did– and it was a really easy process for us– is we loaded about a billion data points from HouseCanary, from our platform, into BigQuery. And then we basically put Tableau on top. What we're going to show is fully stock– just pulling some examples out. So given that we're in San Francisco, I wanted to start with an example around prices.
What's happened with prices in San Francisco over the last year? Most people would say, well, prices changed by about 9% in this area. But if you look at it, on the low end, you've got Pacific Heights in red that basically grew by 7.1%. You've got the Mission a couple of miles away, at a zip code level, that grew at 13.6%. And if you go out to the East Bay and parts of Oakland, it's grown by a lot more than that. But now, the idea is to not only deal at the zip code level, but to actually dig in block by block. That's where investments are made. That's where money is made and lost. And so for us, dealing now at the block level, we dig into these zip codes and provide this information. We've linked everything together with nice parent-child relationships, so I can look at multiple levels, and now we're down at the block level, looking into the future. And basically, I can see that there are parts of San Francisco that are likely to decline and are declining right now at a block level– parts of the Presidio.
And you go a few miles away, and prices are going up– double digits. And for us, that's the dispersion of returns, getting down to the segmentation. That's where the information's really available and the opportunities for making better investment and lending decisions. So now, let's actually think about how we might actually take that one more level down to properties. For HouseCanary, what we've done and what we've put up into BigQuery and GCP is basically for every single property, we have a unique identifier linked to the address, linked to the latitude and longitude. So now you can start actually looking at every property, not only against HouseCanary data, but linking it to your own databases, linking it out to other data sets that we'll talk about, where latitude and longitude and joins are really important. And you can start identifying opportunities. In this case, we just pulled up two properties sold at about the same time, look very similar. What's going to happen over the next year?
One's likely to have price declines, the other is likely to have price increases. But this isn't really all about doing this with two properties. This is about lots of properties. And so whether you're a major lender, a major investor, or a retailer, this is where the use case really comes in for how to use this kind of information. So maybe I'm a major lender and I hold loans in the San Francisco area. For those loans that I have, which ones are going to see increases, which ones are going to see declines? Where is there opportunity? Where is there risk? How do I change my lending practices? How do I manage risk ahead of time? Or I might be an investor that's not comparing against a current portfolio but looking for new investment opportunities. You can start to think about how I look at opportunities that may be coming up through different bankruptcies or through the MLS, linking them to our valuations and our rental analytics if I'm looking to rent, and thinking about where prices are starting to increase or decrease.
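The lender use case just described– joining your own loan book to property-level forecasts via a shared identifier– can be sketched in a few lines. Everything here is invented for illustration: the identifiers, field names, balances, and forecast figures are toy values, not HouseCanary's actual schema or data.

```python
# Toy forecast table keyed by a property identifier, standing in for a
# HouseCanary-style valuation/forecast dataset in BigQuery.
forecasts = {
    "P-1001": {"address": "12 Elm St", "forecast_1yr_pct": 4.2},
    "P-1002": {"address": "9 Oak Ave", "forecast_1yr_pct": -1.8},
}

# Your own loan book, joined to the forecasts by that same identifier.
loan_book = [
    {"property_id": "P-1001", "balance": 412_000},
    {"property_id": "P-1002", "balance": 385_000},
]

# Flag loans on properties whose value is forecast to decline.
at_risk = [
    loan for loan in loan_book
    if forecasts[loan["property_id"]]["forecast_1yr_pct"] < 0
]
for loan in at_risk:
    print(loan["property_id"], forecasts[loan["property_id"]]["address"])
```

At BigQuery scale this join would be a single SQL statement across your table and the provider's, which is exactly why the shared identifier and latitude/longitude linkage matter.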
Many, many opportunities to basically search and find opportunities. So on GCP, we've loaded– as I mentioned– about a billion data points to begin with, off of our platform. There's a lot more where this came from, and based on where your interest guides us, we'll continue to add more. But what we started with was basically, first, home price indices– home price indices for all US metro areas, 18,000 zip codes that cover about 95% of the US population, and 3 million blocks. And for each of those, we've provided historical monthly updates for the last 40 years– so you can see about three economic cycles– and forecasts. We've also included our valuations, which are third-party reviewed and ranked as the most accurate in the market, for 90 million homes. And then finally, our rental values on 60 million homes, which are used by some of the big household names among the big institutional investors who have gone out, bought a lot of real estate, and set rents.
And all of these are updated monthly at this time. So as Matt talked about, where we see the real opportunity is in some of those heat maps, digging in, and starting to join and analyze. It used to take us months to really do this with whole teams of analysts– I was helping buy a lot of properties out of the Lehman crisis and out of the downturn. And now, it takes one person a couple of minutes to start joining, analyzing, identifying opportunities, identifying risk. And for us, that's what's exciting about this: the ability to just make better decisions much, much faster than ever before. And so with that, here's the URL. You can go try it out. And we'd love to get your input. Thank you very much. [APPLAUSE] MATT TAI: All right. Thank you, Jeremy. Next up, I'd like to invite Thani Sokka from Dow Jones to talk about his product. THANI SOKKA: Thanks, Matt, and thanks for having us here to talk about Dow Jones, the type of content and data that we have, and how financial firms and investors are leveraging news content to help accelerate their insights.
So the data landscape is changing quite a bit. I think everybody knows that. We've seen it quite a bit at the conference as well over the last couple of days. More and more companies are putting a heavy investment into big data, and they realize that in order to really stay relevant in the coming years, they need to evaluate how they can leverage big data in a way that can actually give them a competitive advantage and keep them relevant in their respective lines of business. And there's a great opportunity for us now with the accessibility and availability of a lot of these new technologies– a lot of the technologies that we've heard about over the past couple of days at Google, a lot of the open source technologies around big data, and the cloud computing capacity that we now have available to us through the cloud. And so now, organizations are really looking at leveraging that technology, and leveraging the content and data they have, in order to really drive innovation within their organizations.
They're trying to move up this stack from just data and information to automating things like knowledge and wisdom. You hear this all the time in terms of what people are trying to do with things like machine learning and artificial intelligence. And so really, companies are trying to move past information. Information is no longer enough to really stay ahead. People need actionable intelligence that's very relevant to their use case, that's accessible, and that's also customizable for their particular business case. And this is where we feel Dow Jones can really help address a lot of those challenges and help customers, investors, and financial firms like yourselves take advantage of premium news content. Dow Jones has been creating news around economics, around finance, around businesses, and so forth. That's our bread and butter. We've been doing it for over 125 years. We are the Wall Street Journal, we are MarketWatch, we are Barron's, we are Financial News.
So we have a wealth of content. And not only that, we also have a very rich taxonomy. We have a robust data strategy team that curates and updates the taxonomy that we apply on top of our content, which makes our content even more valuable in terms of how you might leverage it within your firm to deliver insights around your particular use cases. We have over 30,000 sources of content. So beyond just the content that we create, we actually ingest all sorts of other publications– premium news from across the globe. And we then apply this taxonomy on top of it in over 28 different languages. We have over 19.5 million public and private companies encoded in this taxonomy, over 1,500 news subjects, over 2,400 regions, and over 1.2 billion documents– and growing– in our archive. So it's a wealth of content that people are now starting to leverage– whether it's quant firms, wealth managers, or financial advisors– to build predictive models that can help drive how they do their investments.
So we're very excited to do a little soft launch of Dow Jones DNA. DNA stands for Data, News and Analytics. It's a new platform that we've developed completely on Google, leveraging a lot of Google's big data technologies– a lot of the stuff that Matt talked about earlier. We're leveraging Dataflow and Dataproc and Google Cloud Storage and BigQuery to make our wealth of content available to customers in a much more accessible way, which can really help them innovate and push the envelope in terms of how they leverage content within their business. We have three main ways of accessing this content. We have what's called Snapshots, for getting large archives. In the past, when customers came to us and said, hey, we'd like to get a year's worth of your news archive, it would take us weeks to deliver it, and we'd deliver it via FTP and CSV files. And it was really difficult for a customer to take advantage of that.
Now, with things like Google Cloud, we can deliver a 30-year archive, a 100-year archive, in literally 30 minutes to an hour. So it's really changing the way customers can take advantage of our content, whether it's to do backtesting to build predictive models, or to do research around very specific use cases, and so on. Another way we're delivering this is through what we're calling Stream. We're actually using Google Cloud Pub/Sub to deliver our news content in near real time to customers. So imagine now you're going to be able to go in, leverage a 30-year archive of news, build a predictive model, and then apply that predictive model to a stream– essentially delivering you real-time insights that are actionable, that you can take and start automating within your organization. And then we also have a rich set of APIs that we're building out on top of the cloud as well, for the more request-response type of use cases, where you might want specific data.
Because we also have a wealth of company data, for example, and so forth. So you may want to understand what's going on with a specific company in terms of their profile and you may want to correlate that with news content and so forth. We have search APIs as well to really dive deeper into our news content if you're looking for very specific pieces of information. And we're building a very accessible ecosystem around this as well to make it easier for customers to be able to leverage this new platform. We're launching a new developer portal, we're building out a customer solutions engineering team ourselves, and we're really building on our partner ecosystem with partners like Google in terms of making our data more available to our customers. And these are some of the very cool things that people are starting to do now with this new platform and with this new accessibility to premium news content. Internally, we had a little code-athon, and literally in a matter of days, we were able to build these cool little assets.
We were actually leveraging Google's natural language processing API on top of this platform to do things like geocoding and sentiment analysis on top of the data. We were leveraging our streams to show real-time trending topics, trending companies, trending sectors– things of that nature that you could start doing. And then we also built out a little demo to show how you can type in a company and see the news volume about that company over a timeline. And all of this is now very easy, even for you guys to do internally within your business. And these are some of the other things that people are, for example, leveraging our content for. Imagine being able to go in– again, because we have a very robust taxonomy– and quickly understand the impact of various trade sanctions across the globe. So basically searching through our taxonomy for terms around compliance and embargoes and sanctions– you don't have to be an expert in these things.
You can actually leverage our taxonomy and build on top of that. Imagine being able to understand the effect of China's economic slowdown across the globe. Again, you can mine our content for key terms and keywords across our taxonomy, looking for economic slowdown, creating a heat map around that type of thing, and seeing, whoa, there seem to be some really hot areas in terms of the economic impact of a potential Chinese economic slowdown. Imagine being able to track vaccination trends across the globe with our news content. How does that impact, for example, investments in the pharma industry? Oh, we see the spread of Zika across these other regions. Do we need to invest in vaccinations there? Are pharma companies investing in vaccinations in those areas? So this is just the tip of the iceberg. You can imagine a lot of different ways that you could leverage this news content to really drive how you do investment. And that's really the power of this platform and what premium news can do for your business.
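The archive-plus-stream pattern described above– build a model on history, then apply it to arriving news– can be sketched very simply. In production the messages would arrive via a Cloud Pub/Sub subscription and the model would be a real trained predictor; here a plain list stands in for the stream and a keyword score stands in for the model, and all the headlines and terms are invented.

```python
# Stand-in for a Pub/Sub stream of taxonomy-tagged news messages.
stream = [
    {"headline": "Regulator approves merger of two large banks"},
    {"headline": "Weather mild across the Midwest this week"},
    {"headline": "Chipmaker warns of supply shortage, shares fall"},
]

# Stand-in for a trained model: a trivial keyword score over alert terms.
ALERT_TERMS = {"merger", "shortage", "bankruptcy"}

def score(headline: str) -> int:
    """Count how many alert terms appear in the headline."""
    words = set(headline.lower().replace(",", "").split())
    return len(words & ALERT_TERMS)

# Apply the "model" to each arriving message and surface actionable items.
alerts = [msg["headline"] for msg in stream if score(msg["headline"]) > 0]
for headline in alerts:
    print("ALERT:", headline)
```

The shape is the point: backtest and fit against the Snapshots archive, then run the same scoring function inside a Pub/Sub subscriber callback for near-real-time alerts.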
So we'd love for people to come in and try it out. We have a sample data set out there in BigQuery. We obviously have a much more robust data set than that if you license our entire content, but we have a nice little snippet here that you can play with. It's actually Brexit data– related to Brexit, in a way, because it has all of the news that we created in the days up to Brexit and a few days after. So you can start digging in and analyzing what the impact of Brexit was across the globe. And then we also have a nice landing page if you want to learn more about our DNA platform, learn more about our content, and learn more about what other customers are doing with this new platform. So we're excited to see customers leverage premium news in ways that weren't possible before. [APPLAUSE] MATT TAI: OK, great. Thank you, Thani. So next up we actually have AccuWeather. I wasn't sure if they would be able to make it this time, but today we have Rosemary joining from AccuWeather, who will be covering her own slides.
ROSEMARY RADICH: Thank you. All right. OK. So how many of you, when you're meeting a stranger for the first time, have to try to think of something to talk to them about? Well, usually one of the first things you talk about is either television or the weather, because weather affects so much of our everyday lives. It affects how many steps we take, it affects what music we listen to, and– of course, most importantly– it affects what consumers buy, which is what investors really care about. But there are some challenges in trying to predict human behavior. It requires a large amount of data and a large amount of technology. We know the environment has a strong impact on consumers, but again, predicting human behavior is very difficult. I'm a sociologist by training, and we usually say, if it was easy, it would be rocket science. So every piece of information you have to help you predict what consumers are going to do will help you improve your model. So the environment has a huge impact on consumers, and most importantly, weather has an effect as well.
So our solution– we want to make sure that you have access to highly accurate and reliable historical weather data. If you have bad data coming in, then no matter how good your modeling is or how advanced your machine learning techniques are, the output is going to be bad as well. We've had several of our clients come to us saying, oh, we heard about this other data source. The website looked great, but we did a little bit of digging and found out it was some guy working out of his mom's basement, playing Xbox half the time. So you want to make sure you do your due diligence when you're looking for data sources to ingest. Having a good source like Google, with reliable partners, is very important. And having this weather information allows you to unlock these aspects of human behavior and uncover new insights. So keeping this short and sweet, if you want to try our dataset, here is the link. We have minimum and maximum temperature, precipitation, and snowfall.
So it's a really small subsection of the vast amount of data that we have. It's just one year of data, but we actually have highly granular data going all the way back to 1951. We have hundreds of different metrics, including RealFeel, where you can see the impact that temperature has on consumers. So we have a large amount of data. So if you're interested in more information about what we have in terms of spatial and temporal granularity and additional weather metrics, feel free to come talk to me later. Thank you. [APPLAUSE] MATT TAI: So thank you, AccuWeather. They're just getting started in the program. The sample is just a very small snippet of what they offer. We'll be working with them to develop a more comprehensive offering in the near future. Next up, I was also surprised that we were able to have this speaker from Remine here. So we'll have Jonathan from Remine covering his own slides as well. JONATHAN SPINETTO: Hi. Good evening, guys. I'm Jonathan Spinetto with Remine, and what we focus on is predictive sale intelligence around SFR real estate.
So what we do is we take a look at all the data. Like Jeremy mentioned earlier, it is a lot. Over 3,200 assessors nationwide, a multitude of different variables. And we combine it with transactional property data and with consumer intelligence– bringing in attributes such as month and year of birth and other individual household-level demographic attributes to understand when a house will be going up for sale. So as a realtor, a mortgage professional, or someone in another part of the industry, understanding when those homes will be selling is what we focus on. And like we talked about, trying to obtain that 360-degree view is important. So my background is actually full-time in real estate. I've been a licensed agent since I was 18 years old. Myself and my business partner participated in over 6,000 residential transactions nationwide– one transaction at a time, working with banks or other parties. And it really gives you insight into what goes on at that level: what is in the mindset, which life events and life actions we can break these activities down into, understanding equity, understanding length of ownership.
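As a purely illustrative toy, two of the signals just mentioned– equity and length of ownership– can be combined into a simple likelihood-to-sell flag. The thresholds, field names, and rule below are invented for the sketch and have nothing to do with Remine's actual model, which draws on far more attributes.

```python
# Invented rule-of-thumb scorer: flag households with both substantial
# equity and a long tenure. Thresholds are arbitrary illustration values.
def likely_to_sell(equity_pct: float, years_owned: float) -> bool:
    score = 0
    if equity_pct >= 30:  # enough equity to make a sale attractive
        score += 1
    if years_owned >= 6:  # a long tenure relative to typical ownership
        score += 1
    return score == 2

# Toy household records with made-up attribute values.
households = [
    {"id": "H1", "equity_pct": 45, "years_owned": 8},
    {"id": "H2", "equity_pct": 12, "years_owned": 9},
    {"id": "H3", "equity_pct": 60, "years_owned": 2},
]
flagged = [
    h["id"] for h in households
    if likely_to_sell(h["equity_pct"], h["years_owned"])
]
print(flagged)
```

A real predictive-sale model would score many more household-level and property-level attributes; this only shows the general attribute-to-signal shape.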
Because usually what you hear is the statistic that the National Association of Realtors puts out, that a home sells every seven years. Well, let's get more granular than that, and let's understand how we can figure out where that information is going to come into play. So what we do is we classify that predicted sale into a six-month, 12-month, 18-month, or 24-month-plus frame. So by bringing all of this data together, we're able to make it available on Google Cloud as a commercial data set. And one of the things that we do have up there is a snippet from our data. It's in Anchorage, Alaska, where we have that classification of these predicted sales. So take a look. Try it out. Reach out to us. We're happy to discuss more about what we have around that predictive sale and other attributes across the United States for SFR, and we're happy to be working with Google on this. Thank you, guys. [APPLAUSE] MATT TAI: OK, great.
Thank you, Jonathan. And so just to wrap this up, we had a great showcase of launch partners here who really saw where BigQuery could go and pretty much jumped in with us. Why are we doing all of this? We are accelerating your time to insight by doing all of these things– hosting the data on GCP for free, simplifying distribution, simplifying access, and making it accessible not just for analysts, but for data scientists and data engineers as well. You can learn more at cloud.google.com/commercial-datasets. There you will see a listing of all five of our partners today, links to their sample data sets, as well as how to contact them for licensing details on their offerings. [MUSIC PLAYING]
Learn how our partners are distributing financial data like historical currency/metal prices, real estate valuations/predictions, news, and weather via Google Cloud Platform.