Over The Edge

Solving the Fundamental Problems of the Cloud with Chetan Venkatesh, CEO & Co-founder of Macrometa

Episode Summary

Today’s episode features an interview between Matt Trifiro and Chetan Venkatesh, CEO & Co-founder of Macrometa. In this interview, Chetan discusses his approach to solving the fundamental problems developers face with cloud-native apps, and how the edge represents a new paradigm for how applications and services are going to be built going forward.

Episode Notes

Today’s episode features an interview between Matt Trifiro and Chetan Venkatesh, CEO & Co-founder of Macrometa.

Macrometa is "Geo Distributed Fast Data As A Service" for cross-region, multi-cloud, and edge computing apps.

In this interview, Chetan discusses his approach to solving the fundamental problems developers face with cloud-native apps, and how the edge represents a new paradigm for how applications and services are going to be built going forward.

Key Quotes

“While the cloud is great, there's just a whole class of new things that people want to be able to do that the cloud is fundamentally not a sound platform for, [especially] if you really want to deal with things that are time-sensitive.”

“The biggest limitation of our current computing model is that we can only do computing in two places. We can either do it on the device or we can do it in the data center. And there's a vast middle mile between these two places that really doesn't do strategic stuff. It doesn't do anything useful or interesting other than shuffling bits from one place to another. We think that's a really interesting place to bring not just computing, but stateful computing, because the state part of this computing problem is very, very important.”

“One of the hidden parts of the edge that nobody really talks about is if you want multicloud, the edge is the place to do it because none of the cloud providers actually have any interest in multicloud. But the edge provides a really interesting place for us to abstract and arbitrate workloads across any cloud provider based on location, latency and regulatory requirements that the customer has for their applications.”

“There are three different parts (of this industry) that are rapidly maturing in parallel, and it's all converging towards some sort of a singularity that's going to create an explosion of value. The first part is capital–capital is getting much smarter about this problem and why it matters...The second part is customers are getting smarter...The third part is the ugly secret about the cloud: it's easy to get in, but it's really hard to get out, and if you haven't built an application with scalability in mind (things like) sloppy coding really cost you a lot of money in the cloud.”

“The telecom operators really need to supersize their thinking. A lot of them are very much still in a deer-in-the-headlights phase…The place where they're trying to intercept the market has already passed by four years back. If you're not thinking serverless, if you're not thinking developer experience as a telecom operator, I think you're big-time screwed.”

“Deploying code is deploying capital. It really is in the cloud world. I think that's actually an emerging area for VCs to look at: analyzing code to figure out what does it cost to run at scale.”

Sponsors

Over the Edge is brought to you by the generous sponsorship of Catchpoint, NetFoundry, Ori Industries, Packet, Seagate, Vapor IO, and Zenlayer.

The featured sponsor of this episode of Over the Edge is Packet, an Equinix company. Packet is the leader in bare metal automation. They are on a mission to protect, connect, and power the digital world with developer-friendly physical infrastructure and a neutral, interconnected ecosystem that spans over 55 global markets.  Learn more at packet.com.

Links

Connect with Matt on LinkedIn

Follow Chetan on Twitter

 

Episode Transcription

[00:00:00] Matt: Hi everybody. This is Matt Trifiro. I'm the Chief Marketing Officer of Vapor IO and also the co-chair of the State of the Edge project at the Linux Foundation.

[00:00:15] And I'm here with Chetan Venkatesh, the CEO and co-founder of Macrometa. How are you doing today?

[00:00:22] Chetan: I'm doing great, Matt. Thanks for having me on the show. I'm really excited to be here.

[00:00:27] Matt: Yeah, you bet. You know, you and I have known each other for a few years, but I don't actually know how you got started in technology. Can you give us the origin story?

[00:00:37] Chetan: Wow. Wow. This is gonna go back to the pre-Cambrian age. So how much time do you have? But let's start.

[00:00:44] Matt: As much as you want. Give me the short version.

[00:00:46] Chetan: Okay, I'll give you the short, short version. I started as an engineer, and over the course of maybe three startups really became an operations guy. And, you know, I started really focused around

[00:00:57] building companies, building teams, building products, and [00:01:00] just working with great people to sort of realize interesting opportunities built around some fundamental new innovation in data and infrastructure. So I've been in data infrastructure for almost 20 years now. My startups have all been in and around distributed data and distributed infrastructure in some form or fashion.

[00:01:17] So I like to joke that I've been solving the same problem for 20 years, so obviously I have not been successful. Yeah. But the other dimension of that, Matt, is that my life has always been about saving milliseconds here and there. So my last three companies really focused on sort of accelerating the data layer in traditional on-prem systems.

[00:01:38] You know, examples: if you're a bank and you're serving, you know, people online through a banking portal, people need to get to their customer records and transactions quickly. And so we wrote software that accelerated that. One of the startups was very focused on accelerating virtual machines, because virtual machines are very storage-hungry.

[00:01:56] That was the last startup I did. And we were kind of the pioneers in what's now called the [00:02:00] software-defined storage space. We wrote one of the first software-defined storage platforms for accelerating data. So now I'm sort of doing the same thing again with Macrometa and, you know, sort of giving

[00:02:10] people their...

[00:02:11] Matt: Well, what was the name of that platform?

[00:02:13] Chetan: That last company was called Atlantis Computing. Started in 2006 and exited in 2016.

[00:02:19] Matt: Great. Great. And when did you found Macrometa?

[00:02:22] Chetan: Macrometa has been pretty long in the tooth from a concept standpoint. My co-founder and I started thinking about this problem in 2014.

[00:02:29] Matt: Well, which, which problem, which problem is it?

[00:02:32] Chetan: Oh yeah. So the problem really was, you know, when we started to think about the centralized architecture of the cloud and what that would look like in 10 years. And we came to the conclusion that while the cloud is great, there's just a whole class of new things that people want to be able to do that the cloud is fundamentally not a sound platform for, if you really want to deal with things that are time-sensitive.

[00:02:53] The way we go about building clouds essentially means that they're too far away from where most people live. [00:03:00] I mean, I don't know people who go and buy a house or rent an office saying, geez, I really want to be next to US-West-1 because the latency is great. We buy our houses and we put our offices where it's convenient for us.

[00:03:12] And so we started thinking about, you know, what does that look like, where the cloud is more diffused and closer to where users are? And as we sort of drilled into that problem, it fundamentally, you know, really opened up as sort of this hard data problem that needed to be solved. So my co-founder Durga and I started thinking about it, you know, 2014 onward, seriously.

[00:03:32] And by 2017, I think we had sort of put the pieces together on what a business might look like, what the technology might look like. And, you know, we got a round of funding from a great investor and we were off to the races. So that's kind of how it all came together.

[00:03:45] Matt: Yeah. So let's talk a little bit more about the problem. You know, if you could go beneath the surface a little, and don't be afraid to be a little technical; this audience ranges from, you know, business to technical. I mean, what [00:04:00] is the problem? What can I not do today, or is very difficult to do today, and why, and how are you solving it?

[00:04:07] Chetan: Sure. So I think the biggest, and it's so obvious that we don't realize it, but the biggest limitation of our current computing model is that we can only do computing in two places. We can either do it on the device, like this laptop that I'm on or a phone, or we can do it in the data center. And there's a vast middle mile between these two places

[00:04:27] that really doesn't do strategic stuff. It just

[00:04:30] Matt: It's the middle mile.

[00:04:30] Chetan: The middle thousand miles, exactly. Yeah. And it doesn't do anything useful or interesting other than shuffling bits from one place to the other place. And, you know, we think that's a really interesting place to bring not just computing, but what we call stateful computing, because the state part of this computing problem is very, very important.

[00:04:50] As I've looked at the edge in the last three, four years, everybody got really excited about the edge, especially the edge of the network where, you know, folks like [00:05:00] CDNs have historically operated, because they thought they could run computing over there in a more meaningful way, where instead of just sort of, you know, inspecting packets and doing stateless things, we could potentially bring real applications over there. But applications need a robust and rich

[00:05:16] data infrastructure that allows you to, you know, depend on the data layer, so that you can build sophisticated things with data. But the edge is very diffuse and distributed, unlike the cloud, right, which is in one location. And now suddenly you need to start thinking about: how do you make this data layer reliable in a fully distributed way, across potentially hundreds of locations that the data needs to be available and served from?

[00:05:41] Matt: So let's try to put that in the context of a real application, even if it's made up, but just something that I can understand. Oh, I see, that's something that would be very difficult to do today, or would be a bad user experience or a dangerous experience, or something like that.

[00:05:57] Help me understand what new [00:06:00] use cases and applications are enabled by doing this widely distributed, low latency data.

[00:06:06] Chetan: Maybe one of the most obvious ones is cybersecurity, right? As we've brought security to the perimeter of the network, they need to be able to detect bad actors and threats in real time. And then potentially taking that information about what is a bad actor or what is a threat, and propagating that to other control points in the network, becomes very, very important.

[00:06:28] Historically, cybersecurity has been centralized. We have an appliance, and that appliance sort of runs in its own little silo. Now we're starting to talk about cybersecurity as a network model, where at the edge of the network you start to do threat detection and prevention, and that requires a data platform and an underlying data substrate that can take that threat information, for example, and make it globally available instantly.

[00:06:52] And, you know, by instantly, I mean at the speed of light, the speed at which the network can actually move it.

[00:06:57] Matt: So that's interesting: globally. It's not just [00:07:00] distributed within a region or a nation, it's distributed globally.

[00:07:05] Chetan: Exactly, because threats are global, and most applications are starting to become global. I mean, we live in a really interesting world where five people in a startup can build an application that potentially, you know, serves millions of users in a very short period of time. And so we've collapsed so much of infrastructure and made it so easy for people to build applications that security itself

[00:07:29] now needs to be a programmatic API that can be embedded in these types of applications and run globally as a part of that.

[00:07:36] Matt: Does your product look like a database?

[00:07:40] Chetan: Yeah, that's a great question. So we're actually infrastructure for developers. Our customers are, you know, backend developers and frontend developers who plumb their application into our global network and call APIs on us to essentially get data from one place to another place.

[00:07:57] And it's exposed as a database that looks [00:08:00] like a conventional database, so that programmatically it doesn't change the paradigm of how you build edge or cross-cloud applications for them.

[00:08:07] Matt: So is it SQL-based, or is it some custom interface? What's the nature of the database?

[00:08:13] Chetan: It's a NoSQL database. You know, SQL is very transaction-oriented. It's great for sort of capturing data from what's called an in-place update standpoint. You know, in SQL, for example, you have, say, a customer called Joe. Their balance is 10 bucks. Now they added five bucks to the account.

[00:08:30] Now the balance is 15. But the provenance of things, like the fact that Joe's account balance was actually 10 and then an event called, you know, "credit $5" happened, all those types of very rich metadata that form the basis for how you think about data flows and building applications and taking advantage of that, don't exist in the SQL world.

[00:08:48] So NoSQL provides a much better programming model for those types of applications.
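
The in-place versus event-log distinction Chetan draws with the Joe example can be sketched in a few lines of Python. This is a toy illustration of the two models, not Macrometa's actual API; the `Ledger` class and event field names are made up for the sketch.

```python
from dataclasses import dataclass, field
from typing import List

# In-place update (SQL-style): only the latest balance survives.
balances = {"joe": 10}
balances["joe"] += 5              # how Joe got to 15 is lost

# Event-log style (what many NoSQL/streaming systems encourage):
# every change is kept as a rich event, so provenance survives.
@dataclass
class Ledger:
    events: List[dict] = field(default_factory=list)

    def append(self, event: dict) -> None:
        self.events.append(event)

    def balance(self, account: str) -> int:
        # The current balance is derived by replaying the events.
        return sum(e["amount"] for e in self.events if e["account"] == account)

ledger = Ledger()
ledger.append({"account": "joe", "type": "open", "amount": 10})
ledger.append({"account": "joe", "type": "credit", "amount": 5})

assert balances["joe"] == ledger.balance("joe") == 15
```

Both models end at 15, but only the ledger still knows that a "credit $5" event happened, which is the metadata event-driven pipelines build on.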

[00:08:51] Matt: So is there sort of a, what's the phrase to use, eventually consistent?

[00:08:55] Chetan: This is where we really distinguish our approach. [00:09:00] Databases have always sort of lived in this mutually exclusive, one-or-the-other world of being strictly consistent or eventually consistent. And if you're strictly consistent, you trade off latency and performance.

[00:09:11] Matt: You wait for everything to update.

[00:09:12] Chetan: Exactly. Or in the eventually consistent world, which is really not consistent, you know, you'll get the

[00:09:20] Matt: eventually, maybe consistent.

[00:09:21] Chetan: Eventually, maybe consistent. Exactly. In fact, the definition says at some point, theoretically, everything should line up, if you're

[00:09:28] Matt: When every node is done licking its wounds.

[00:09:31] Chetan: Exactly, exactly. But, you know, what we have done is essentially provided a programmatic model where developers can choose between strict consistency and other forms of consistency.

[00:09:45] Matt: That's interesting. So it's actually a continuum. So I can declare: I want this data when it reaches a certain state. And that can be when I know it's consistent, or when, like, I have the most recent read, or something.

[00:09:58] Chetan: Yeah. We [00:10:00] call it causal consistency. And what it allows you to do is start to think about your problem and your data in two ways: what really needs strict consistency, and just bound those things to trade off latency and performance for that strict consistency. But everything else, you know, the nature of the data might be such that a slightly less consistent model than strict consistency works for you.

[00:10:22] And so you get better performance by being willing to accept a little bit of staleness in the data, but with guarantees that the data will always get consistent within 150 milliseconds, which is frankly the speed of the network from one end of the world to the other end. So our platform basically uses the latency between the two furthest points in our network as the bounding box.
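
The per-request consistency knob Chetan describes can be modeled in a toy single-process sketch. Everything here is illustrative: the `EdgeStore` class and `Consistency` enum are invented for the example, and the 150 ms sleep merely stands in for the cross-network coordination a strict read would really pay for.

```python
import time
from enum import Enum

class Consistency(Enum):
    STRICT = "strict"   # coordinate globally before answering; slower
    CAUSAL = "causal"   # answer from the local copy; bounded staleness

class EdgeStore:
    """Toy model of choosing consistency per read (hypothetical API)."""

    # Stand-in for the ~150 ms bound between the two furthest points
    # in the network, i.e. the worst-case staleness window.
    GLOBAL_RTT = 0.150

    def __init__(self):
        self._data = {}

    def write(self, key, value):
        self._data[key] = value

    def read(self, key, consistency=Consistency.CAUSAL):
        if consistency is Consistency.STRICT:
            # A strict read pays the coordination cost up front.
            time.sleep(self.GLOBAL_RTT)   # simulated global round trip
        return self._data[key]

store = EdgeStore()
store.write("cart:42", ["milk"])
fast = store.read("cart:42")                        # low latency, may lag
safe = store.read("cart:42", Consistency.STRICT)    # waits ~150 ms
assert fast == safe == ["milk"]
```

The design point is that the developer, not the database vendor, decides which keys pay the latency tax for strictness and which accept bounded staleness.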

[00:10:46] Matt: Hmm. Yeah. That's interesting. And, you know, when I think of distributed databases and, you know, data ingestion and things like that, I think of Cassandra and Kafka [00:11:00] and things like that. And I think I heard that one of the distinctions in your business is that you actually are a hosted platform.

[00:11:07] You actually are providing this as a service. Is that true?

[00:11:11] Chetan: That's right. So we're a completely managed service. There's no infrastructure to install. There's no software. It's completely serverless. And it's a combination of four things, actually, Matt. It's not just a database, because it's a database that sits at 175 edge locations around the world, physically 10 milliseconds one way away from 80, 85% of the population that has a network-connected device of some sort.

[00:11:39] So we've got physical proximity, from where users are sitting, with our PoPs, but also a database that can actually span an actual logical database across all 175. So if you want to reach, you know, 80% of the world with an application that you've built, you can build it on us and we'll actually locate the data

[00:11:55] in all 175 places, and everything will be kept in sync.

[00:11:58] Matt: So you're [00:12:00] replicating to all those locations?

[00:12:02] Chetan: We're replicating data, but we do it smartly. It can replicate in the context of the location. It can replicate in the context of the application. It can also replicate in the context of data regulation and things that are coming in now in the form of PII management and, you know, data sovereignty management and things like that.

[00:12:18] So the replication is very intelligent in how it places data at different locations. But it's not just a database in the sense that you need to stop using your database and use us, because that's how databases have historically worked. It actually acts as a connector and a tier to existing databases in the cloud.

[00:12:36] So one of the things that we've built is a connector and a protocol proxy for DynamoDB. So if you're a customer and you've built an application on Dynamo (and Dynamo is a great platform, it's one of the reasons people go to AWS), well, Dynamo has got some big, big weaknesses in the way it does replication.

[00:12:57] Global Tables, for example, lets you only replicate to [00:13:00] five regions in the Amazon network. It's eventually consistent. It's slow. The latency is very high. You can use us as a way to essentially pull data from Dynamo, keep it at the edge, and serve it from there. And your application simply talks to us

[00:13:14] instead of talking to your Dynamo server. We speak the Dynamo protocol natively. So your application thinks it's talking to Dynamo in US-West-1, but guess what, it's actually talking to a Macrometa PoP in San Jose, or wherever:

[00:13:30] we're a protocol proxy for Dynamo.

[00:13:31] Matt: That is really interesting. Are you doing kind of some sort of a near-real-time sync with the DynamoDB?

[00:13:37] Chetan: Pretty much. Again, you know, we look at consistency and syncing as a continuum, right? So you can define what kind of consistency model you want for your data back in Dynamo. But the edge now becomes a place to read, write, mutate, query everything without actually going back. Pretty much, you know, it's coming soon.

[00:13:57] Let me put it that way.
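
The protocol-proxy idea, that the application keeps speaking the Dynamo API and only the endpoint changes, can be sketched with a toy client. The class, table, and URLs below are all invented for illustration; with AWS's real Python SDK the same redirection is typically just the `endpoint_url` parameter when constructing the client.

```python
# Toy Dynamo-style client: application code is written once against the
# same API, and only the endpoint decides cloud vs. edge.
class DynamoStyleClient:
    def __init__(self, endpoint_url: str, backing_store: dict):
        self.endpoint_url = endpoint_url   # where requests would be sent
        self._store = backing_store        # stand-in for the remote table

    def get_item(self, table: str, key: str) -> dict:
        return self._store[(table, key)]

# The edge keeps a synchronized copy of the cloud table (simulated here
# by sharing one dict between both endpoints).
table_data = {("orders", "123"): {"status": "shipped"}}

def order_status(client: DynamoStyleClient) -> str:
    # Application logic: identical regardless of which endpoint it talks to.
    return client.get_item("orders", "123")["status"]

cloud = DynamoStyleClient("https://dynamodb.us-west-1.amazonaws.com", table_data)
edge = DynamoStyleClient("https://edge-pop.example/dynamo", table_data)

assert order_status(cloud) == order_status(edge) == "shipped"
```

The point of the pattern is that migrating to the edge becomes a one-line configuration change rather than a rewrite of every data-access call.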

[00:13:58] Matt: Yeah, that's [00:14:00] a neat approach. You know, you say developer-friendly, and I think of companies like Stripe, which would provide a service that, you know, enables these things that normal people

[00:14:11] encounter, but it's focused on a developer. And I can just imagine a developer that's invested, you know, a year of his or her life in implementing DynamoDB saying, this is just the fastest path to getting my data distributed. That's a pretty neat model.

[00:14:24] Chetan: You know, it's really simple, because the developer simply takes an application and points it now to the edge running Macrometa. And you see the, you know, the average latency drop for a full round trip that's stateful, where you're going to query Dynamo and get a bunch of data

[00:14:41] and bring it back: anywhere from 450 to 500 milliseconds per round trip on Dynamo, down to under 50 milliseconds for the full, what we call P99, 99th percentile. So 99% of your requests will be under 50 milliseconds, guaranteed. And that changes the game from a user experience standpoint. For most [00:15:00] applications, it opens up the ability to do more in a smaller unit of time.

[00:15:04] And, you know, now you start to see those low latency applications in ad serving and e-commerce and, you know, recommendations using AI, right, really find the ideal place to open up to their full potential.

[00:15:16] Matt: Yeah. And, you know, you think about how the internet is evolving, right? And there are companies like yours, in a model sort of pioneered by the CDNs, that have placed, I'll call them servers for the sake of simplicity, compute equipment, in regional data centers. So data centers that serve, you know, a hundred-millisecond radius, as opposed to a 500-millisecond radius or a 50-millisecond radius.

[00:15:42] Right. And it sounds like today you're in those types of locations. But as we know, there are actually, you know, a number of hops between there and the end device, including the last mile network, all the way up to that access network. And so I'm imagining that as [00:16:00] customers demand even lower latencies from the database, you'll be able to deliver on that by simply moving your servers farther out to the edge.

[00:16:09] Okay. And then, yeah, talk to me a little about that. And then let me just finish the question, because I think it's related: do you also see a version of your services moving sort of on-prem, in the way that Amazon Outposts or Azure Stack do? Do you see bridging that last mile with your data?

[00:16:28] Chetan: That's a great question. So today we sit pretty much where the CDNs sit, the same carrier hotels and, you know, peering points that CDNs are at. We're now moving, through a couple of really interesting telecom partnerships, right onto the 5G RAN. And that now puts us, you know, once 5G is out,

[00:16:50] Matt: Yeah. When you say on the 5G RAN, do you mean running your workload on an Ericsson or Nokia appliance? Or do you mean running on a [00:17:00] machine that's in the next rack? Yeah. Okay. And like using the MEC interface?

[00:17:13] Chetan: But Matt, one bit. I just wanted to finish the part on the product, because there's the database, and then there's something that's like Kinesis, and then there's the compute part.

[00:17:24] Matt: Okay. So Chetan, you've built this global fabric for distributing data. How do you actually productize it? How do you wrap it into a product? How do customers consume it? Can you tell me a little bit more about that?

[00:17:37] Chetan: Yeah. You know, when Durga and I originally envisioned the business, we thought this was going to be the cure for cancer for data, you know, the next newest, greatest database ever, you're never going to need anything after this. But, you know, reality hits when you go to market and start talking to customers.

[00:17:58] So as we talked to [00:18:00] customers, they loved the idea of low latency and global distribution through a geo-fabric. But, you know, they've already got databases. They've already invested lots of money in applications that use X, Y, and Z database. And so that's where we started realizing that there were two fundamental pieces missing.

[00:18:15] One part of it was that data was not just sort of the data that sat in databases. The world was moving towards more of this event-driven model, where instead of a client-server model, where the client would, you know, ask the server to do something, we moved into an event-driven model where

[00:18:42] the client was sending events to a server, and the server would evaluate the event and do specific things based on the type of thing, you know, the event described. For example, when you open up your Uber application on your phone, that's not a client-server app where the Uber mobile app is telling the server, hey, Matt is now online; hey, you know, update the database to say Matt is requesting a ride. It's more like: Matt is online, Matt is in this location, Matt is walking [00:19:00] in this particular direction.

[00:19:01] This is a stream of events going out. Then finally you click the button requesting an Uber, and there's, you know, an event that says requesting a ride, and here are all the details packed into that event. And so the underpinning idea of this architecture is the data lake.

[00:19:16] You essentially stream events into a data lake and you build a pipeline that evaluates all these events and in real time takes actions on the same events, right? So an event like "Matt is online" triggers many things: like, you know, find potential ride drivers in the area, and so on. You know, is Matt an Uber Gold customer, and what does that potentially mean, you know, in terms of what kind of a ride that'd be, potentially?

[00:19:42] So it's all parallel, real-time, concurrent. And so this event-driven architecture is really important. So the second piece that we built into the product is a stream processing system for real-time event-driven applications. It's a messaging platform, but with a compute engine built into the messaging [00:20:00] platform. We call it eventing functions.

[00:20:02] It's like AWS Lambda, but for events. Every time an event happens, you can fire off a very low latency function that can evaluate that event and take action on it in real time, right?

[00:20:16] Matt: Right, but, but it has access to a stateful database.

[00:20:20] Chetan: access to a stateful database. There you go.
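
The "eventing functions with a stateful database" combination can be sketched as a toy runtime: events arrive, each fires a registered handler, and handlers read and write shared state. The decorator, handler names, and Uber-flavored event types are all invented for the sketch; this is the pattern, not Macrometa's product API.

```python
from collections import defaultdict

# Toy "eventing functions" runtime: each incoming event fires a small
# function that can read and write shared stateful storage.
state = defaultdict(int)      # stand-in for the edge's stateful database
handlers = {}

def on_event(event_type):
    """Register a handler to fire whenever this event type arrives."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

def emit(event):
    """Deliver one event to its handler, if any."""
    handler = handlers.get(event["type"])
    return handler(event) if handler else None

@on_event("rider_online")
def track_rider(event):
    state[("online", event["rider"])] = 1

@on_event("ride_requested")
def match_ride(event):
    # The handler is stateful: it consults data written by earlier events.
    if state[("online", event["rider"])]:
        state[("rides", event["rider"])] += 1
        return "matched"
    return "rider offline"

emit({"type": "rider_online", "rider": "matt"})
assert emit({"type": "ride_requested", "rider": "matt"}) == "matched"
```

Without the shared state, the second handler could not know the rider was ever online; that dependency between events is exactly where a stateless function-as-a-service model falls short.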

[00:20:21] And that's the magic, right? Now all those hard problems are gone, because you can actually build. Yeah. And this really opened up just so many use cases for us. A couple of examples: one of the largest infrastructure software providers for the cloud in the world is our customer. And they built a real-time workload management and monitoring system on top of us that today, you know, monitors about a hundred thousand workloads across the world, in different clouds, in different locations and regions, in real time.

[00:20:52] So each of these workloads has a little agent that's transmitting everything

[00:20:57] in real time.

[00:20:58] Matt: It's not even kind of an end user use case. [00:21:00] It's an infrastructure monitoring use case.

[00:21:01] Chetan: It's a machine-to-machine use case. It's what we've done before, which was shipping logs.

[00:21:05] But now the log is a rich event stream. It's no longer just this tar

[00:21:10] Matt: And is it triggering automated actions?

[00:21:13] Chetan: Exactly. So we're evaluating everything from security information to performance of the workload. And now there's AI/ML running at that edge location, trying to predict if the workload is at risk of failing or underperforming,

[00:21:25] so that preventive action can be taken for some of these critical workloads.

[00:21:30] Matt: Yeah. And the low latency, distributed, stateful data is what really makes that super powerful. Yeah, I totally get that, because you think about the applications for serverless functions today, you know, whether, like, Cloudflare Workers or Lambda functions, that's one of the big challenges.

[00:21:47] I mean, even look at building a service-based application on top of Kubernetes: stateful data is tough, which you wouldn't think. It seems like the most basic problem in computer science, which is, like, we gotta [00:22:00] retain the data.

[00:22:06] Chetan: Yep.

[00:22:59] Matt: Yeah. So let's talk about some other use cases.

[00:23:03] Chetan: So one of the other really interesting ones is we've got a cybersecurity vendor that does network threat detection. They've got, you know, thousands and thousands of appliances planted in lots and lots of customer sites. And they're collecting all this network threat information in real time.

[00:23:20] What they used to do was send all of that into a giant data lake in the cloud. And, you know, it was sort of this post-process model: all of this raw sewage of telemetry, so to speak, has to land in the data lake, and then they run a lot of batch-style computing to slice and dice it. Their time to insight, which is really a big deal, right,

[00:23:40] is, you know, we're not talking seconds and minutes, it's a week here before, you know, threats are actually

[00:23:45] Matt: Yeah.

[00:23:45] Chetan: Exactly. And, you know, what ends up happening with us is the edge now acts as a place where you filter all of this data. And in the security business, you're only interested in anomalies. You're not [00:24:00] interested in normal patterns of data. So everything that looks normal is discarded, and the 1% of interesting patterns that need to be actuated on, that need to be potentially processed further, only those things, you know, either go back into the cloud or get actuated from the edge itself.

[00:24:14] So if you want to block a particular IP address or something like that: instant block, propagated to the other thousands of appliances in 150 milliseconds flat. And, you know, you've solved a problem that historically would have cost these guys millions of dollars, and they're doing it with a few API calls using our platform.

[00:24:29] So, you know, the combination of stateful data and event-driven architecture on top of that for streams and messages, along with eventing functions, which now provide stateful computation on data at rest as well as data in motion, is very, very powerful for building, you know, applications that truly take advantage of the edge.
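
The filter-at-the-edge idea, discard the normal 99% locally and only forward the anomalies, can be sketched as follows. The detection rule (repeated auth failures from one IP) and all names and addresses are hypothetical; real threat detection is far richer, but the data-volume argument is the same.

```python
from collections import Counter

def build_detector(events, threshold=3):
    """Hypothetical rule: an IP failing auth `threshold`+ times is a threat."""
    failures = Counter(e["ip"] for e in events if e["auth"] == "fail")
    return lambda e: failures[e["ip"]] >= threshold

def edge_filter(events, is_anomaly):
    """Discard normal traffic at the PoP; only anomalies travel upstream."""
    return [e for e in events if is_anomaly(e)]

# 100 telemetry events: 97 normal, 3 suspicious (made-up addresses).
telemetry = (
    [{"ip": "10.0.0.1", "auth": "ok"}] * 97
    + [{"ip": "203.0.113.9", "auth": "fail"}] * 3
)
detect = build_detector(telemetry)
to_cloud = edge_filter(telemetry, detect)

assert len(telemetry) == 100 and len(to_cloud) == 3
# Only 3% of the raw telemetry leaves the edge; the block decision for
# 203.0.113.9 could also be actuated locally and propagated to peers.
```

The win is twofold: far less data crosses the backhaul, and the block can fire from the edge immediately instead of waiting on a batch job in the cloud.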

[00:24:48] Matt: Now, are there any sort of, you know, end user, either business end user or consumer end user, applications that are being implemented on your platform, or that you've had [00:25:00] advanced discussions about and feel pretty confident about the use case?

[00:25:04] Chetan: Yeah. You know, we've got basically what we call here-and-now problems that customers have, except for the telecom 5G customers, who are sort of trying to intercept a three- or five-year horizon from an end-state standpoint. Everybody else is trying to solve problems that are burning right now.

[00:25:22] We've got one of the very largest e-commerce providers in the world. They have real challenges synchronizing, you know, some simple things. Here's an example: if you were shopping online during the first few days of COVID, when we ran out of everything and people were, you know, outside with pitchforks and torches outside Costco, right?

[00:25:42] You remember those days? It feels like an ancient time, but it was only a few months back. You went online, you went to Instacart, you went to costco.com, you ordered things, and you got a barrage of emails saying product delayed, product delayed. And that was because the inventory systems in the region and the [00:26:00] store weren't synchronized with the e-commerce platform. It's eventually consistent.

[00:26:07] Things like SNS, Kafka, et cetera, don't provide the consistency model where, when data changes in one location, you actually know that all the other places where that data is potentially consumed have also been updated to the same state. So you've got this e-commerce company that's solving what is a billion-dollar-a-year problem in synchronizing data across locations and connecting their backend systems to their storefront front-end systems.
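Chetan's stale-inventory scenario can be sketched as a toy simulation. Everything here (region names, the lag value, the single-writer model) is illustrative, not how any of these systems are actually implemented; the point is only that a write lands in one region immediately while a reader in another region keeps seeing the old value until replication catches up.

```python
import time

class Region:
    """A toy regional replica holding inventory counts."""
    def __init__(self, name):
        self.name = name
        self.inventory = {}

class EventuallyConsistentStore:
    """Writes land in one region; replication to peers is delayed."""
    def __init__(self, regions, replication_lag=0.05):
        self.regions = {r.name: r for r in regions}
        self.replication_lag = replication_lag
        self.pending = []  # (apply_at, region_name, sku, qty)

    def write(self, region_name, sku, qty):
        # The local region sees the write immediately...
        self.regions[region_name].inventory[sku] = qty
        # ...but peers only see it after the replication lag.
        apply_at = time.monotonic() + self.replication_lag
        for name in self.regions:
            if name != region_name:
                self.pending.append((apply_at, name, sku, qty))

    def read(self, region_name, sku):
        # Apply any replication events that are "due", then query locally.
        now = time.monotonic()
        still_pending = []
        for apply_at, name, s, q in self.pending:
            if apply_at <= now:
                self.regions[name].inventory[s] = q
            else:
                still_pending.append((apply_at, name, s, q))
        self.pending = still_pending
        return self.regions[region_name].inventory.get(sku, 0)

warehouse = Region("us-west-warehouse")
storefront = Region("us-east-storefront")
warehouse.inventory["toilet-paper"] = 100
storefront.inventory["toilet-paper"] = 100
store = EventuallyConsistentStore([warehouse, storefront])

store.write("us-west-warehouse", "toilet-paper", 0)  # sold out
stale = store.read("us-east-storefront", "toilet-paper")  # still shows 100: the "product delayed" email
time.sleep(0.06)  # wait out the replication lag
fresh = store.read("us-east-storefront", "toilet-paper")
print(stale, fresh)
```

The window between `stale` and `fresh` is exactly the gap Chetan says a strongly consistent cross-region store closes.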

[00:26:35] Matt: Yeah, that's interesting. In a previous life I ran an e-commerce company, and one of the challenges we had in presenting the final cart for checkout was that we had to call a bunch of third-party APIs. We had to calculate sales tax by calling Avalara, we had to calculate shipping by calling UPS. We'd do all these calls, and just in those seconds that it took to call the APIs and assemble all the results, we had cart abandonment.

[00:27:00] [00:26:59] And so I could see those companies saying, hey, we'd like to reimplement on the Macrometa platform so that our customers can make calls to us and we can deliver the answer in that region.
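Part of Matt's checkout latency is additive: serial third-party calls sum their round trips. One common mitigation, sketched here with made-up stub services and delays (the function names and rates are invented for the example), is to issue independent calls concurrently so wall time approaches the slowest call rather than the sum.

```python
import concurrent.futures
import time

def call_tax_service(cart):
    """Stand-in for a third-party tax API (like the Avalara call Matt mentions)."""
    time.sleep(0.2)  # simulated network round trip
    return round(cart["subtotal"] * 0.0875, 2)

def call_shipping_service(cart):
    """Stand-in for a carrier rate API (like the UPS call)."""
    time.sleep(0.2)
    return 7.99

def checkout_serial(cart):
    # Round trips add up: total wall time is the SUM of the calls.
    start = time.monotonic()
    tax = call_tax_service(cart)
    shipping = call_shipping_service(cart)
    return tax + shipping + cart["subtotal"], time.monotonic() - start

def checkout_concurrent(cart):
    # Independent calls overlap: wall time approaches the SLOWEST call.
    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        tax_f = pool.submit(call_tax_service, cart)
        ship_f = pool.submit(call_shipping_service, cart)
        total = tax_f.result() + ship_f.result() + cart["subtotal"]
    return total, time.monotonic() - start

cart = {"subtotal": 100.00}
total_s, t_serial = checkout_serial(cart)
total_c, t_concurrent = checkout_concurrent(cart)
print(f"serial: {t_serial:.2f}s, concurrent: {t_concurrent:.2f}s")
```

Serving those answers from a nearby region, as Matt suggests, attacks the other half of the latency: the per-call round trip itself.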

[00:27:14] Chetan: And there's another side to this problem, Matt, and I'll use ad matching as an example here. In ad matching, you've got a hundred to 150 millisecond window, about a hundred if you really want to do this well. You need to fetch a cookie from the user's browser and inspect the cookie.

[00:27:32] And it's based on how rich the cookie's description of your behavior is. If it's telling you boring things like, here's a male, mid-forties, located in Redwood Shores, California, that's not as interesting as saying this guy bought camping gear last week. Now you can start to do very interesting things with ad matching.

[00:27:51] If you had a cookie that actually had that level of granular data, the challenge is you've got to go fetch the cookie, unpack it, and then essentially match it to potential [00:28:00] advertisers, do a bid in real time, and then serve it. You've got a hundred milliseconds to do this. You couldn't use distributed databases for this particular problem up until now.

[00:28:11] You've had to build these silos of databases in each region, and it's a hard, hard problem for these guys. And they really are at that hundred, 150 milliseconds. Now we've got an ad tech company that's doing this with very rich, granular targeting at 50 milliseconds. That's a P99 type of service level.

[00:28:29] 99% of their customers can see a rich ad targeted at them: personalized, contextually relevant, and highly actionable from the end-user standpoint. Because we were able to serve that in 50 milliseconds, they can actually double the matching they can do. The same infrastructure now allows them to get two X more matching in the same unit of time.
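The pipeline Chetan describes is essentially a deadline-budgeted sequence of steps. One common pattern, sketched here with hypothetical stub functions and timings (none of this is Macrometa's or any ad platform's real code), is to check the remaining budget between stages and degrade to a generic fallback ad if the window is blown.

```python
import time

AD_DEADLINE_MS = 100  # the roughly 100 ms window Chetan describes

FALLBACK_AD = {"campaign": "generic-fallback", "bid": 0.0}

def remaining_ms(start):
    """Milliseconds left in the serving budget."""
    return AD_DEADLINE_MS - (time.monotonic() - start) * 1000

def fetch_cookie(user_id):
    time.sleep(0.01)  # simulated: fetch and unpack the browser cookie
    return {"segment": "bought camping gear last week"}

def match_advertisers(cookie):
    time.sleep(0.02)  # simulated: real-time match and bid
    return {"campaign": "camping-tents", "bid": 2.40}

def serve_ad(user_id):
    start = time.monotonic()
    cookie = fetch_cookie(user_id)
    if remaining_ms(start) <= 0:
        return FALLBACK_AD  # budget blown: serve something generic
    ad = match_advertisers(cookie)
    if remaining_ms(start) <= 0:
        return FALLBACK_AD
    return ad  # rich, personalized ad, served inside the window

print(serve_ad("user-123"))
```

Cutting per-stage latency (say, by reading state at the edge instead of a far-away region) is what turns the fallback path into the rare case.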

[00:28:48] So these are all what I think are critical here-and-now problems.

[00:28:52] Matt: Could be twice the revenue.

[00:28:54] Chetan: Or more.

[00:28:55] Matt: Maybe more. Super interesting, because they're more highly targeted and maybe convert better. That's [00:29:00] really interesting.

[00:29:00] Chetan: Two X or three X more revenue. Yeah.

[00:29:02] Matt: Yeah. So let's look a little bit under the hood. When I think about your business, it sounds a little bit like a mini cloud provider.

[00:29:09] I mean, you can run compute and store data and do all this. And I realize you're not competing with Amazon; you're complementary to the Amazons and the Azures and the Googles of the world. But when I think about all of the physical infrastructure that a cloud provider has, and I know you're a startup company, and there's maybe some funding news we could talk about...

[00:29:30] How are you building a global network? What are you putting in the field? What are you building? What are you renting? What are you buying? How does this thing actually come together?

[00:29:43] Chetan: Yeah. So we're a software company first and foremost. Our core competence is building, running, and operating the software platform. We don't do data centers; we can't build them. We don't have the capitalization for that sort of thing. You guys, on the other hand at Vapor, are [00:30:00] appropriately capitalized and understand that part of it.

[00:30:01] You know how to put capital to work in terms of actual data centers and real estate and all of that fun stuff. We partner with cloud providers and buy capacity from them in a virtual cloud model. We're kind of like the Virgin Mobile of cloud, in the sense that you had MVNOs in the mobile space that never bought spectrum directly from the government.

[00:30:23] They bought a slice of the spectrum from different providers around the world and did value-added services on that network. That's exactly our model.

[00:30:32] Matt: So the example that you used at the very beginning of this interview, when you were saying that we don't put our offices next to us-west and us-east. I mean, that's one of the things that I point out, and I know this world is changing, you know, Amazon's announced Wavelength and things like that, but today there are exactly two locations where

[00:30:49] I can spin up an EC2 instance. So...

[00:30:57] Which cloud providers, or what type of cloud provider, are [00:31:00] you actually building your infrastructure on, if you can name them?

[00:31:03] Chetan: We use the big three, because in certain areas they actually have data centers very close to an urban region, the Bay Area, for example. You've got all three with footprints that are within 50 milliseconds of San Francisco and San Jose.

[00:31:19] Matt: Yeah. I mean, they have their CDN PoPs, but I didn't know that you could actually run intelligent workloads. How are you doing that?

[00:31:25] Chetan: AWS has Gilroy, for example; one of them has something around Gilroy. As an example, Google has Los Angeles, a data center right in the middle of Los Angeles County. So we use them. And then we couple that with some really smart folks like DigitalOcean and Linode. That's a plug for those guys, because they've got fantastic infrastructure.

[00:31:47] So we plumbed that in, and then we go deeper into certain regions where coverage is not as deep, by working with regional cloud service providers and telecom companies. In specific [00:32:00] parts of Asia-Pac, for example, where there's huge growth, we're working with the incumbent telecom provider over there and running co-location inside their data centers. Or, if they have at the very minimum an IaaS running OpenStack and stuff like that, we'll consume that from them.

[00:32:19] Matt: Now, how do you run a workload on Amazon in Gilroy? Because that's not an option on my AWS interface.

[00:32:29] Chetan: I'm not sure if it's AWS that has a data center in Gilroy, but one of the three has one.

[00:32:34] Matt: So you basically take the assets of every cloud provider that you have access to, and you pick and choose based on the locations where you can run. So if Google's in this location, and Linode's in another location, and Amazon's in another location, you'll run...

[00:32:50] Chetan: Your app there, exactly. Here's the analogy I'll use. You're probably very familiar with the saying "servers are cattle, not pets." Yeah. [00:33:00] Well, we're saying "clouds are cattle, not pets." And the whole idea is that when you write to Macrometa, Macrometa then becomes the runtime for you to schedule and orchestrate your app on any cloud: cross-cloud, edge, or cloud.

[00:33:13] We'll figure out where the app and the data need to be orchestrated, deployed, and available, and it'll completely flatten the differences between all these different cloud providers for you. So that's kind of how we're doing it. Our customers don't see any of the providers underneath us. They just see our API.
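As a rough illustration of the placement decision such a runtime has to make, here is a toy region picker. The provider catalog, latency numbers, and selection rule are all invented for the example; this is not Macrometa's actual scheduling logic, just the general shape of "choose where to run based on latency and data-residency constraints."

```python
# Hypothetical catalog: names and numbers are illustrative only.
REGIONS = [
    {"provider": "gcp",    "region": "us-west2",  "country": "US", "latency_ms": 18},
    {"provider": "aws",    "region": "us-west-1", "country": "US", "latency_ms": 25},
    {"provider": "linode", "region": "eu-west",   "country": "DE", "latency_ms": 40},
    {"provider": "gcp",    "region": "eu-west3",  "country": "DE", "latency_ms": 35},
]

def place_workload(max_latency_ms, allowed_countries):
    """Pick the best region per country that satisfies latency and residency rules."""
    candidates = [
        r for r in REGIONS
        if r["latency_ms"] <= max_latency_ms and r["country"] in allowed_countries
    ]
    # Lowest latency wins here; a real scheduler would also weigh cost and capacity.
    by_country = {}
    for r in sorted(candidates, key=lambda r: r["latency_ms"]):
        by_country.setdefault(r["country"], r)  # keep the first (fastest) per country
    return list(by_country.values())

placements = place_workload(max_latency_ms=50, allowed_countries={"US", "DE"})
print(placements)
```

The point of the abstraction is that the application above this function never learns which provider it landed on.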

[00:33:31] They read and write into that API. They see a DynamoDB-compatible API, or Dynamo mode, as we call that feature.

[00:33:39] Matt: That makes sense.

[00:33:40] Chetan: Yeah, but that might actually be a database that's emulating Dynamo, running on GCP or Linode. We also have something called Kinesis mode, which emulates Kinesis.

[00:33:51] And then we have a Lambda mode, which means you can take your Lambda, or your container that's on EKS or ECS, and run it on us, and now that's running anywhere else. So [00:34:00] for me, one of the hidden parts of the edge that nobody really talks about is that if you want multicloud, the edge is the place to do it, because none of the cloud providers actually have any interest in multicloud.

[00:34:10] Why would they? They want you to run in their cloud, plumb into their APIs, and try to lock you in. But the edge provides a really interesting place for us to abstract and arbitrate workloads across any cloud provider based on location, latency, and regulatory requirements that the customer has for those applications.

[00:34:32] Matt: Yeah, that's interesting. We did a webinar for LF Edge, and one of the questions that came up was: what if I need to run a workload in, let's just say, North America, and when I say North America, I mean not just the US but Canada and Mexico, and I want to run it in edge locations across multiple carriers?

[00:34:56] This was a 5G question. And so the person said, look, that [00:35:00] means I need to figure out how to deploy in Canada across multiple carriers, and in the US across multiple carriers, and so on. So the question is: are there projects where people are trying to federate these telco cloud resources?

[00:35:18] My answer is that people are working on it from the bottom up and they're working on it from the top down. But it sounds like you've kind of...

[00:35:27] Chetan: Yeah. Because we looked at the cloud and saw that as the right abstraction API, and we bring that API out of all these cloud providers. Maybe one contrast is with Outposts, right? Outposts is giving you EC2 and Elastic Block Store, EBS, basically in this new form factor. What Outposts is not giving you, though, is the platform, [00:36:00] and that's what developers really care about.

[00:35:52] Nobody really likes VMs and all of that stuff. They want to write to a database as a service, you know, DynamoDB. They want to consume a queue as a service, SQS. They want notifications through SNS. They want to run their ML models on SageMaker. So the developer's job now has really become to integrate and orchestrate across different third-party services and build just the business logic that matters.

[00:36:17] And all the undifferentiated heavy lifting, you leave to the cloud provider. What we're doing now is providing that model across any cloud, and allowing developers to focus on building just their business logic without worrying about which cloud it runs on or which location it runs on. The hard problems in concurrent distributed programming, like consistency, concurrency, and latency, are handled by providing a very deterministic model for them.

[00:36:41] Matt: Yeah. Yeah, that makes sense. What's the pricing model for your services?

[00:36:47] Chetan: It's very much a serverless model. Our customers pay us in sort of a hybrid context. They fundamentally pay for the amount of storage they're consuming on our network. [00:37:00] When you sign up and deploy an application, you pick regions of interest where you want your app and your data to be available.

[00:37:08] So you're paying fundamentally for the amount of storage that you're consuming in all of those locations. The second thing you're paying for is the number of API calls that result in a data request: reading, writing, querying the database, things like that. And into that API call we've rolled up all the hairy stuff like networking ingress and egress fees, and we've also rolled up the function execution time.

[00:37:34] So essentially it's a very simple two-dimensional model: pay for the storage, pay for the number of API calls. Those two things roll up all the hard bits, so they're good proxies for everything else. We bundle it up into a subscription, which gives you a little bit of rate predictability.

[00:37:53] If you're buying the subscription from us, it comes with fixed allocations of storage and fixed allocations of [00:38:00] API calls, and you get a little bit of a discount if you buy an annual subscription, as an example. So it's basic, simple, and easy for people to understand. You don't need to get Corey Quinn to come and help you understand your bill.
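That two-dimensional model is easy to sketch. The rates and allocations below are made up for illustration and are not Macrometa's actual pricing; only the shape of the calculation (storage billed per region, plus data-touching API calls, with everything else rolled into those two rates) follows what Chetan describes.

```python
# Illustrative rates only: not real pricing.
STORAGE_RATE_PER_GB = 0.25      # per GB-month, per region replicated to
API_RATE_PER_MILLION = 1.50     # per million data-touching API calls

def monthly_bill(storage_gb, regions, api_calls, included_gb=0, included_calls=0):
    """Storage is billed in every region the data is replicated to; networking
    and function execution are assumed rolled into the API-call rate.
    Subscription allocations (included_gb / included_calls) are netted out first."""
    billable_gb = max(storage_gb * regions - included_gb, 0)
    billable_calls = max(api_calls - included_calls, 0)
    return billable_gb * STORAGE_RATE_PER_GB + billable_calls / 1e6 * API_RATE_PER_MILLION

# 10 GB replicated to 5 regions, 20 million API calls:
print(monthly_bill(10, 5, 20_000_000))  # 42.5
```

Two inputs, one number: the "no Corey Quinn required" property of the model.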

[00:38:12] Matt: Yeah, exactly. So, you know, you and I met, oh, I'm going to say two and a half years ago, probably around the original State of the Edge project, or maybe it was our Kinetic Edge Alliance, but it was a couple of years ago. And I feel like the industry has gone from infancy to, like, late teenage years in two years.

[00:38:40] What do you see? What has changed the most? I mean, first of all, are you noticing that? Do you feel like there's a sudden convergence: technology like yours is maturing, infrastructure like my company's is being deployed, the cloud providers are paying attention, 5G is real, [00:39:00] sort of? What's the biggest change that you think is driving it?


[00:39:05] Chetan: Maybe there are three different parts of it that are all rapidly maturing in parallel, and it's all converging towards some sort of singularity that's going to create an explosion of value here. That's how I think about it. The first part is capital. Capital is getting much smarter about this problem and why it matters.

[00:39:21] And that capital is really getting deployed across the infrastructure layer, as well as the smart software stacks that need to be built. In the first wave of startups that got funded, you had a lot of companies that were repurposing Kubernetes as a way to run applications on the edge, without fundamentally realizing that this was a data problem, and you need to solve the data problem.

[00:39:44] Kubernetes is undifferentiated stuff. Anyone can do that. So you had a lot of startups, and I think there were maybe two dozen startups I was tracking at one point, that all had a flavor of Kubernetes or containers or virtualization of some sort that ran a workload at the edge.

[00:40:02] [00:40:00] That was the most common solution. And I think there are good niches where you can solve valuable problems, but you can't build a moat. There's no market around them; it's not a big addressable market. Capital's gotten really smart now, in that capitalists are trying to fund big platforms that could potentially be the next AWS or the next Google and things like that.

[00:40:25] Some of the capital sees mature companies like Cloudflare and Fastly as potentially where that actually gets built out. And I think they have some advantages, but they have fundamentally big disadvantages: their PoPs are built for CDN and content serving. At a very basic level, if I can digress for a second, stateful data is very memory intensive.

[00:40:46] And those guys are flash and storage intensive in their architecture. So there's a big retrofit that has to happen in their architectures to actually enable stateful computing. But I think there's an opportunity for folks like them to partner with [00:41:00] companies like Macrometa that are trying to solve the data problem. So capital has gotten smart in that way: they're picking platforms now that comprehensively address both the data and the compute side of the problem.

[00:41:10] And I think there was a little bit of a dinosaur extinction event in the startup world in the last two years. Either companies pivoted away from the edge, or they died because they couldn't get to the next set of milestones. But there were lots of great learnings, and fundamentally great entrepreneurs who all came together. I think State of the Edge was one of the many great places where a lot of collaboration and idea exchange happened.

[00:41:36] So that's the capital part of it. The second part is that customers are getting smarter. When we started evangelizing the promise of the edge two or three years back, customers kind of scratched their heads and said, do I have a latency problem? You know, I don't really have that issue.

[00:41:49] I'm not a bank; I'm not trying to do high-frequency trading. What has fundamentally shifted now is that it's not seen as a latency problem; it's seen as [00:42:00] revenue loss, or something that's translating into real dollars and cents where it hurts for these customers.

[00:42:05] Either they're losing customers or they're not able to retain customers. I've got SaaS companies that simply want to use us in the trial of their application, because there's a 45% dropoff in SaaS on first logon during the trial when the app is too slow. If you can meaningfully improve the performance of just that, you improve your conversions.

[00:42:29] Matt: Yeah, that's a really good point. You know, I still think we are largely in a space where solutions are looking for problems, but some of the problems have become very clear, and some of the solutions line up with those very nicely in a way that they didn't two and a half years ago.

[00:42:48] Chetan: Yeah. So the second part is the customer getting smart. The third part of it is the cloud, and this is actually [00:43:00] what's driving a lot of Macrometa uptake at this point in time.

[00:43:03] The third part of this is the ugly secret about the cloud: it's easy to get in, but it's really hard to get out. And it's not a secret, but the secret part is this. Say you're a company that has, for example, a hundred thousand monthly active users, and now you've got COVID, your application is really popular, and you've gone from a hundred thousand to 200 or 300,000 monthly active users.

[00:43:26] You would expect that the cost of the cloud would go up from one X to two X, maybe three X, in a linear relationship with the growth. But the truth is, if you go from 100,000 to 200,000, the cost goes up five or seven X. Fundamentally, that's because if you're using data underneath your platform, you've got to pay for all those read request units and write request units, and your architecture matters a lot in the cloud.

[00:43:48] So developers are suddenly realizing that sloppy coding costs you a lot of money in the cloud. If you're not doing indexing right on DynamoDB, for example, or [00:44:00] the way you're turning on auto-scale starts to spread your data out, you start to pay for read and write requests, which are fundamentally far more expensive than just the unit cost of the storage that you're consuming.
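The indexing point can be made concrete with a toy cost model in the spirit of DynamoDB-style read request units (the price constant is illustrative, not any provider's real rate): an unindexed scan pays for every item examined, so its cost grows with both table size and query volume, while an indexed lookup pays only for what it returns.

```python
# Toy read-cost model: a scan is charged for every item it touches,
# an indexed lookup only for the items it returns.
RRU_PRICE = 0.25 / 1_000_000  # illustrative: $0.25 per million read units

def scan_cost(table_items, queries):
    # Every query walks the whole table.
    return table_items * queries * RRU_PRICE

def indexed_cost(items_returned, queries):
    # Every query touches only its result.
    return items_returned * queries * RRU_PRICE

# User base doubles from 100k to 200k, one lookup per user per day:
for users in (100_000, 200_000):
    daily_scan = scan_cost(users, users)   # grows quadratically with users
    daily_index = indexed_cost(1, users)   # grows linearly with users
    print(users, round(daily_scan, 2), round(daily_index, 6))
```

Doubling the users quadruples the scan bill but only doubles the indexed one, which is the superlinear cost curve Chetan describes.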

[00:44:11] Right. So that's one part of the problem. The second part is that most people writing a cloud-native app come in from the monolith world, and maybe they've watched a few smart people talk about transitioning to microservices, but they haven't really built an application with scalability in mind.

[00:44:28] So they need to go and rearchitect the application. They need to fundamentally start thinking about scalability from a different standpoint, and that's going to cost engineering time and money and all of those pieces. What we're doing is essentially giving customers an aspirin for that particular problem.

[00:44:45] If you've got a poorly performing application, and you've got scalability and cost problems, use us as a way to use the edge to offload the cloud. Now, I've got an IoT customer; it's an IoT platform that serves things like smart [00:45:00] locks and stuff like that.

[00:45:03] It's a cloud-based platform, right? And because people are staying home, they're using all these IoT devices at home more. So they're going up three X, four X, five X in monthly active users, and their costs are going up 10 X, 15 X, 19 X. So the edge now becomes a place where you can soak up, absorb, and process all of that at a fundamentally lower cost point than doing it in the cloud.

[00:45:29] Matt: That's really interesting, because I've always thought of the edge as being more expensive.

[00:45:34] Chetan: There's an inversion of things. The edge is expensive in certain parts: storage is more expensive on the edge, and compute can be more expensive. So you do need what I call an edge-native architecture in your platform that's built for these types of things. And maybe I'll double-click and go really deep for just 30 seconds here. Conventional [00:46:00] databases

[00:46:00] usually have tree data structures underneath them, and that makes it easy and simple to solve some problems with querying, reading, and updating. But fundamentally, the bigger the tree, the more expensive it is to navigate it and get to the data that you're interested in. The Macrometa architecture fundamentally thinks about the edge as a place to do transactions and IO processing very cheaply.

[00:46:22] So instead of a tree, it uses a log. Logs are simple structures, computationally very cheap. You simply keep appending data to them; you write to the head of the log. What we do in the background, though, is collapse the log into a tree-like structure in memory and query that. So there's a materialized view engine

[00:46:40] that essentially shows you the state of the log in real time. That makes us about a hundred times cheaper than using a conventional database to do the same type of thing at the edge. That said, there are other tradeoffs, in that the data sets can't be as big as in the cloud. You've got infinite storage in the cloud, and you've got [00:47:00] platforms like Dynamo that allow you to build infinitely sized databases.
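The log-plus-materialized-view idea can be sketched in a few lines. This is a deliberately naive, single-process toy, nothing like a production engine: writes are O(1) appends to the log, and reads lazily collapse the unapplied tail of the log into a key-value view before querying it.

```python
class LogBackedStore:
    """Append-only log with an in-memory materialized view: a rough sketch
    of the log-instead-of-tree design described above."""
    def __init__(self):
        self.log = []       # cheap writes: always append at the head
        self.view = {}      # materialized view: current state per key
        self.applied = 0    # how far into the log the view has caught up

    def write(self, key, value):
        # O(1) append; no tree rebalancing on the write path.
        self.log.append((key, value))

    def read(self, key):
        # Lazily collapse any unapplied log entries into the view, then query.
        while self.applied < len(self.log):
            k, v = self.log[self.applied]
            self.view[k] = v
            self.applied += 1
        return self.view.get(key)

store = LogBackedStore()
store.write("sensor-1", 20.5)
store.write("sensor-1", 21.0)   # later write supersedes the earlier one
store.write("sensor-2", 19.8)
print(store.read("sensor-1"))   # 21.0
```

Note the tradeoff Chetan names: the log keeps every write, so this trades storage footprint for cheap write throughput.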

[00:47:04] Macrometa is not meant for those sizes of datasets. It's meant more for up to a petabyte of data. If you're doing several petabytes and larger datasets, the edge is not meant for that. So the edge-native architecture, to me, is something that fundamentally optimizes IO, throughput, and latency at the cost of storage,

[00:47:23] whereas the cloud-native architecture really optimizes for storage footprint.

[00:47:31] Matt: Yeah, that's super interesting. Let's switch gears a little bit. What's Macrometa's relationship to open source? Do you use open source? Do you contribute to open source? How do you think about open source in the edge world?

[00:47:46] Chetan: We've built our core technology completely on proprietary secret sauce. We use a particular technique for data distribution that's now becoming better understood. It's called CRDTs: conflict-free replicated data [00:48:00] types. It's a new way to think about data replication, solving some hard problems with consistency and reliability of data when you replicate it.

[00:48:09] And the problem with CRDTs was that the information had been available for many years, with lots of great papers on it, but it was like programming in assembly language. If you wanted to do it, first you had to get through what were very, very tough academic research-level papers and really understand the concepts.

[00:48:27] And then you needed to build everything. It's kind of like how God bakes a cake: first he creates the Big Bang, creates the universe, all of that, and at some point you get wheat. It's almost like that with CRDTs. You've got to start by building all the fundamental tools before you can start to use them.

[00:48:43] We wanted to use CRDTs, but we wanted our customers to experience a regular database, with all the magic of CRDTs encapsulated and hidden from them.

[00:49:00] [00:48:59] So we wanted to provide a CRDT engine, but hide, obfuscate, and encapsulate all the complexity of CRDTs away from developers. So developers could use existing languages, frameworks, and techniques, using a NoSQL database that they're already familiar with, with all of the responsibility for correctness, latency, consistency, and concurrency delegated to our layer.

[00:49:29] So all of that stuff that deals with CRDTs, that deals with reliable message delivery across hundreds of locations with low latency, all of that is proprietary. On top of that, we use certain things from open source: for orchestrating our system we use Kubernetes, for example, and we've heavily modified Kubernetes there. But we're now starting to spend a lot of time thinking

[00:49:51] about parts of our stack that really should be open sourced, and that would help drive the community and the ecosystem towards adopting [00:50:00] multi-region architectures and driving this as the new way to build applications. So my comment is: stay tuned. We're going to have some announcements about what we're open sourcing in our platform in 2021, because we think we've got to stop building single-region apps by default, and instead start building multi-region,

[00:50:18] cross-region, between-core-and-cloud types of apps by default. And that might need someone like us to open up some parts of our platform and make them available for others to innovate on.

[00:50:30] Matt: So, to tie this up in a nice bow: one of the things that people like you and I, the sort of OGs of edge, probably think about a lot, certainly I do and I suspect you do, is whether we have the opportunity to topple a couple of the dominoes faster than the others to accelerate the whole industry.

[00:50:51] I mean, what are the dominoes that you'd topple, or what are the things that you'd like to see happen?

[00:51:02] [00:51:00] Chetan: Oh, it's such a great question. There are two different ends of this problem. The first part is that the telecom operators really need to supersize their thinking. A lot of them are still very much in a deer-in-the-headlights phase on this whole thing. Now, I've been fortunate that we're working with a handful of global telecom providers.

[00:51:24] One of our partner customers is one of the biggest ones here in North America, and they've gone very deep. They started using us as an internal application platform first, and now they're trying to expose some 5G services on top of that. But in general, when you look at telecom operators, the easiest thing for them to do is think about 5G as, oh, it's a faster pipe, and we'll continue to do what we're doing.

[00:51:43] We're very familiar with all of that. But if you're going to end up as just a faster pipe to the cloud, all the monetization and value capture is going to happen in the cloud. That's what

[00:51:53] Matt: happened last time around.

[00:51:54] Chetan: Happened the last time.

[00:51:55] Matt: And we don't want that. Yeah.

[00:51:57] Chetan: But I think the telcos underestimate [00:52:00] how sophisticated the developer experience needs to be.

[00:52:03] A lot of telcos are thinking to themselves, hey, you know, I'm going to allow customers to deploy a Docker container on my network. The truth is, who gives a shit? People don't want to use Docker containers. They want to use smart, rich, high-level services and build apps faster.

[00:52:28] So I think the place where they're trying to intercept the market passed by four years ago already. If you're not thinking serverless, if you're not thinking developer experience as a telecom operator, I think you're big-time screwed.

[00:52:42] So that's one part of it. Using OpenStack... exactly, I was trying not to use the word OpenStack, but yeah. So that's one part of it. The domino I'd flip is that somehow, magically, they all wake up tomorrow, the bozo bit in their heads has flipped, and they understand this [00:53:00] part.

[00:53:00] It would be a very different world in five years in terms of how applications are built, and the relevance of telcos in that world would be very strategic, versus...

[00:53:10] Matt: Yeah, well, and they certainly have an opportunity to be strategic. You know, I understand where the telcos are coming from, right? We're spending all this money investing in our network infrastructure upgrades; we can't get clobbered by OTT anymore; we can't be dumb pipes.

[00:53:28] But to your point, and you very quickly honed in on it: we have to make something that developers want to use. And I think that DNA is not yet widespread in the telco industry, just understanding how important that is. I mean, you can hire developers all day long who know how to use the major clouds and want to use the major clouds.

[00:53:52] You can't find that many developers who want to do something bespoke.

[00:53:55] Chetan: Exactly.

[00:53:57] Matt: So that's really interesting. Yeah. So you mentioned two things.

[00:54:00] [00:54:00] Chetan: Yeah.

[00:54:01] Matt: Is there another one?

[00:54:02] Chetan: Maybe even three. I think the second place is CDNs.

[00:54:06] Matt: How so?

[00:54:06] Chetan: CDNs are either going to be roadkill or they're going to be the next big thing, and I think some platforms are better positioned than others. The business of shipping static data and just doing...

[00:54:18] Matt: Oh, that's not their business anymore. They all have

[00:54:21] Chetan: anymore.

[00:54:22] Matt: if not, you know, sort of serverless workload products

[00:54:26] Chetan: Exactly. And I think they're moving in the right direction, but when everyone's got the same V8 JavaScript engine, you know, what's the differentiation, right?

[00:54:34] Matt: Well, and they don't have stateful distributed data.

[00:54:37] Chetan: They don't, and maybe I'll

[00:54:40] Yeah, they don't have stateful distributed data. Maybe the one set of props I'll give is to Cloudflare; I think they're really thinking about this the right way.

[00:54:45] they've

[00:54:45] Matt: Well, I mean, your platform could potentially run on top of Cloudflare or StackPath or, you know, any of the CDNs.

[00:54:52] And I'm not going to ask you to speculate, but I'm going to speculate: you might be a really nice acquisition target for a CDN.

[00:54:59] And [00:55:00] I'm teasing, but what I'm trying to underscore there is how strategically important your early focus on a really tough problem is.

[00:55:10] Like I said, I mean, still, you know, distributing a database across racks in a data center is hard.

[00:55:18] Let alone distributing it across racks, across multiple data centers

[00:55:22] globally.

[00:55:23] Chetan: And it is, it is a big, bad, hairy problem, and we have the benefit of 20 years of trying to solve this exact problem, frankly, you know.

[00:55:34] Matt: right?

[00:55:34] Chetan: We know very little about anything else in the world, but between my cofounder and me,

[00:55:38] Matt: A 20-year overnight success. I love it.

[00:55:40] Chetan: we know a lot about this little problem, but, you know, frankly, ask me about anything else.

[00:55:44] I'm pretty dumb about it. So that's the second piece of it. And I think the third part of this is the developer ecosystem. You know, developers are very iterative in expanding their mindset. I think there are a lot of falsehoods that I [00:56:00] had to unpack about my own thinking about developers in the last few years. They're not a monolithic group that all get excited about technology.

[00:56:06] Most of them, frankly, just want to do their job as easily as possible, and they want their weekends back. You know, the surface at which they program continues to evolve, and serverless seems to be a great intersection point. But serverless is still very crude and primitive, in the sense that obvious problems are not solvable inside the serverless model.

[00:56:31] And I think that's sort of the third domino that needs to fall: the tooling and maturity of the serverless framework, right? How you build applications using serverless, and more importantly, understanding something that developers are very poor at, which is what it actually costs to run your code.

[00:56:48] That's something that developers have never really had to deal with, right? You kind of just said, oh yeah, I need a server that's X big, with Y amount of memory, and, you know, Bob's [00:57:00] your uncle. That was the world we lived in. And now suddenly you wrote a really bad piece of code and it's going to cost you an extra million dollars a year because you screwed up

[00:57:09] your indexing, you know? So now suddenly there's this intersection, and I'm going to steal Simon Wardley's view of this world, where development and financial operations, you know, collide, and out of that some sort of new type of developer emerges who is very, very,

[00:57:31] Matt: That is really interesting, and it hurts my head a little bit, although I have had some discussions about this. One of the challenges, when you think a little abstractly about edge computing, is that you have many more constraints. You know, the idea of the infinitely scalable cloud at a very, very low latency location

[00:57:47] doesn't really comport, because you've got limited capacity. I mean, you know, you can only build data centers so big, and you can only stick so many kilowatts of equipment in there. So you've got definite constraints, and we can see [00:58:00] a world where there's going to be congestion for those resources during peak times.

[00:58:04] And the demand on those resources is going to be real time. So you can't have a human sitting there saying, well, you know, I just won't run this workload between 5:00 PM and 8:00 PM. You're going to have to have a machine, and probably a machine-learning-based auction bidding system, where the developer, you're right, is going to have to make a decision about how much am I willing to pay to run this workload

[00:58:30] now, or at this latency, or in this physical location. And we're going to have to build tooling to do that. But you're right, we're also going to have to build a mindset that deploying code is deploying capital. That's really,

[00:58:44] it's a really interesting

[00:58:46] thought process.

[00:58:47] Chetan: You put it so succinctly. I'm going to steal that, Matt: deploying code is deploying capital. It really is, in the cloud world, and in a really obvious way.

[00:58:56] Matt: When you do a git push, you have to put a weight on it: what it's worth to you.

[00:59:00] [00:58:59] Chetan: Yeah, exactly. And I think that's actually an emerging area for VCs to look at, which is analyzing code to figure out what it costs to run at scale.

[00:59:07] You know, one of the most obvious things developers need to start thinking about is: can I write this in a way that costs 20% or 30% less, you know? And, you know, to your point, do I schedule this in New York at peak time, when I know that 7:00 AM to 9:00 AM

[00:59:25] is peak congestion?
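The auction-style placement Chetan and Matt are describing could be sketched roughly as follows. This is purely an illustrative sketch of the idea: the location names, prices, latencies, and the `place` function are all invented for the example, not part of Macrometa or any real platform.

```python
# Hypothetical sketch: a developer attaches a maximum bid and a latency bound
# to a workload, and a scheduler picks the cheapest location that satisfies
# both, or defers the workload when nothing clears the bid.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_bid_cents: float   # the most the developer will pay per run
    max_latency_ms: int    # latency bound the workload must meet

# Invented spot prices (cents per run) and latencies for a few locations.
# Edge sites are fast but expensive; the central cloud is cheap but slow.
LOCATIONS = {
    "nyc-edge":   {"price": 12.0, "latency_ms": 8},
    "chi-edge":   {"price": 4.0,  "latency_ms": 25},
    "us-central": {"price": 2.0,  "latency_ms": 80},
}

def place(workload):
    """Return the cheapest location meeting both bounds, or None to defer."""
    candidates = [
        (info["price"], loc)
        for loc, info in LOCATIONS.items()
        if info["latency_ms"] <= workload.max_latency_ms
        and info["price"] <= workload.max_bid_cents
    ]
    return min(candidates)[1] if candidates else None

# A free game willing to pay at most a nickel for a sub-30 ms run:
game = Workload("free-game-session", max_bid_cents=5.0, max_latency_ms=30)
print(place(game))  # -> chi-edge
```

In a real system the prices would come from a live spot market, and the `None` case is exactly the knob being discussed: defer the workload to off-peak hours, or degrade the experience rather than exceed the cost threshold.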

[00:59:30] Matt: Yeah. Or trading off the cost of customer service for delivering a less-than-ideal user experience, because the cost of running that workload would have exceeded your threshold. I mean, there are going to be some really interesting knobs to turn, you know. Obviously not with life-critical or safety-critical things, but, you know, like a game, a free game, you know, Pokemon Go.

[00:59:53] Chetan: Yeah,

[00:59:58] Matt: I'm not going to pay more than a nickel to run that [01:00:00] workload.

[01:00:09] Chetan: slab or something like that. Right.

[01:00:12] Matt: An 11-year-old would pay for better ping speed on Fortnite. So

[01:00:16] Chetan: Yeah.

[01:00:17] Matt: that would be how we monetize 5G: we just sell upgraded ping speeds.

[01:00:22] Chetan: Micro-monetization, right.

[01:00:24] Matt: Exactly. Thank you for joining us. This has been a terrific conversation. I really appreciate you sharing all these details, and for being, you know, an early supporter of State of the Edge, and just a great contributor to the community.

[01:00:39] Chetan: Thank you. Actually, props to you, Matt, and the folks at Vapor. State of the Edge is just a really special thing. I think it all started with State of the Edge. You guys actually named it and, you know, created a community and allowed a lot of smart people to start coming out and supporting each other.

[01:00:57] So for me, it all started with State of the Edge. [01:01:00] Up until then, there really was no edge as such. So, you know, thank you for what you have been doing the last couple of years.

[01:01:06] Matt: That's awesome. Thanks, Chetan.