Over The Edge

The Past, Present, and Future of Edge Technology with Jonathan Seelig, Co-Founder and CEO of Ridge & Co-Founder of Akamai

Episode Summary

Today’s episode features an interview that took place live at the 2020 Edge Computing World Conference between Matt Trifiro and special guest Jonathan Seelig, Co-founder and CEO of Ridge and Co-Founder of Akamai Technologies. In this interview, Jonathan and Matt discuss the past, present, and future of edge technology, starting with the origins of Akamai at MIT in 1997, through to Jonathan’s latest venture at Ridge, where he is building the distributed cloud platform that will power the next generation of cloud native applications.

Episode Notes

Today’s episode features an interview that took place live at the 2020 Edge Computing World Conference between Matt Trifiro and special guest Jonathan Seelig, Co-founder and CEO of Ridge and Co-Founder of Akamai Technologies.

As a co-founder of Akamai, Jonathan is one of the true godfathers of edge computing, having enjoyed an extremely impressive two-plus decade career in technology as a founder, investor, and board member. 

In this interview, Jonathan and Matt discuss the past, present, and future of edge technology, starting with the origins of Akamai at MIT in 1997, through to Jonathan’s latest venture at Ridge, where he is building the distributed cloud platform that will power the next generation of cloud native applications.

Key Quotes

Sponsors

Over the Edge is brought to you by the generous sponsorship of Catchpoint, NetFoundry, Ori Industries, Packet, Seagate, Vapor IO, and Zenlayer.

The featured sponsor of this episode of Over the Edge is Zenlayer. Improving user experience doesn't have to be complicated or expensive. Zenlayer helps you lower latency with on-demand edge services in over 150 PoPs around the world. Find out how you can improve your users' experience today at zenlayer.com/edge

Links

Connect with Matt on LinkedIn

Follow Jonathan on Twitter

Episode Transcription

[00:00:00] Matt: Hi everybody. Thank you, Gavin. This is Matt Trifiro, I'm the CMO of edge infrastructure company Vapor IO, and co-chair of the Linux Foundation's State of the Edge project. And as Gavin mentioned, I'm also the host of Over the Edge, a weekly hour-long interview-style podcast on edge computing and the future of the internet.

[00:00:15] You can find it at overtheedgepodcast.com and of course on all the major podcasting platforms, including iTunes and Spotify. So I want to give a quick shout out to my podcast sponsors: Catchpoint, NetFoundry, Ori Industries, Equinix, Seagate, Vapor IO, and Zenlayer. Through their generous contributions, we've been able to produce this labor of love.

[00:00:35] And today we're coming to you live from Edge Computing World. And I'm thrilled to be joined by Jonathan Seelig, currently CEO and co-founder of Ridge, but also one of the co-founders of Akamai, which gives him true godfather status in the edge computing world. We're going to talk about Jonathan's legendary technology career, including the origins of Akamai at MIT in 1998, and we'll carry through [00:01:00] to Jonathan's current venture at Ridge.

[00:01:02] We're going to cover the past, present, and future of edge technology. So pay attention. Hey Jonathan, how are you doing today?

[00:01:08] Jonathan: [00:01:08] I'm great. Thank you, Matt. Thank you very much for having me. It's fun to do this live and I'm very happy to be chatting.

[00:01:14] Matt: [00:01:14] Yeah, that's great. You know, one of the questions that I always like to ask people, because the answers are so interesting:

[00:01:19] I think, you know, even though you were at MIT studying business, and you have a physics undergrad, you actually left to found Akamai. I won't spoil too much of the story, but how did you even get started in technology?

[00:01:31] Jonathan: [00:01:31] So I was always sort of technically minded when I went to college.

[00:01:35] I grew up in Vancouver, Canada, on the West Coast but further north, and then I went to college at Stanford as an undergrad. And I was interested in technical disciplines, but actually more in the pure sciences. So I have a degree in physics, and I decided to pursue that as opposed to engineering.

[00:01:51] And, you know, when I graduated from my undergrad degree, I realized that I probably wasn't smart enough to be a full-on physicist, and so I should go and look [00:02:00] at practical things and see if there were places where some of that knowledge could be applied. And I got just kind of pulled into the technology industry, initially through the telecom industry.

[00:02:08] My first job out of college was with a company actually based out of Israel called ECI Telecom. And I just loved the idea that we were building stuff that made networks work better, and that made, you know, at the time, circuit-switched phone calls just have a lot more capacity on networks.

[00:02:26] It just felt like a really cool way to apply my technical interest in making networks work better and allowing people to communicate. And so it's been, you know, at this point, twenty-five-plus years of that.

[00:02:41] Matt: [00:02:41] When did you start your business? At MIT?

[00:02:42] Jonathan: [00:02:42] I got to MIT in the summer of 1997, and really just a few months after I got there, I was introduced

[00:02:50] to my co-founder at Akamai, Danny Lewin, and his faculty advisor at MIT, Tom Leighton, who's our other co-founder in the company. And Tom and Danny had [00:03:00] been working on some technology for a little bit already in the lab at MIT, and in late '97, when I got to MIT, I started talking to them about how we might take some of those technologies and build a business out of it.

[00:03:11] So I got there in '97, and we really started working on it as a business in late '97 and kind of full-on launched it in '98.

[00:03:20] Matt: [00:03:20] What does Akamai mean?

[00:03:21] Jonathan: [00:03:21] Akamai is a Hawaiian word that means intelligent and clever, and in the vernacular, the way it's used in casual conversation, it means cool.

[00:03:31] And so we liked a lot of the connotations of it. We were starting the business in the late nineties, and at the time it felt like every domain name was already taken and there were no names left for companies. And I guess that the various different suffixes that we've given ourselves since, the .io's and the .co's and all of that stuff, have maybe expanded the namespace capacity some, but at the time we felt like we were [00:04:00] scratching pretty hard to find something.

[00:04:02] And 20-something years later, that problem is oh so much worse. So yeah, that's where the name came from.

[00:04:08] Matt: [00:04:08] Yeah, that's super interesting. And you know, as legend has it, and correct me if I'm wrong, there was an MIT business plan contest, now a hundred-thousand-dollar contest but at the time a $50,000 contest, that you guys entered, and you didn't win, which is interesting.

[00:04:21] Cause I went and looked at all the people that have won, and I think Akamai is by far the most successful company of the group. Tell me about those early days of you and your co-founders getting together and going down that path.

[00:04:34] Jonathan: [00:04:34] Yeah, well, so you're absolutely right. The competition at the time was the MIT 50K competition.

[00:04:40] It had just been upgraded that year from the 25K, so it was already, I guess, halfway to where it is today. And we did not win the 50K competition. In fact, we were finalists, meaning that we were in the final six, and we were not in the top three, so we were either four or five or six; pick your [00:05:00] ranking for us.

[00:05:01] And, you know, the competition had a lot of different, diverse kinds of areas and fields as part of it. When we didn't win the MIT 50K competition, we were kind of crestfallen, and we were kind of upset that we hadn't done as well as we would've liked. But in the end, there were some fundamental business model questions that it caused us to answer, and to build a really much better business at Akamai than perhaps we would have

[00:05:28] had we won outright. So, you know, the blessing of the skinned knee, I guess, right?

[00:05:36] Matt: [00:05:36] But I think for all intents and purposes, you did end up winning. So that's really great. You built a great business.

[00:05:38] Jonathan: [00:05:38] Akamai is a fantastic company. I'm super proud of what we were able to build there.

[00:05:44] I'm very proud of what we built as we went along, and I'm incredibly proud of where the company is today. It's a tremendous business. But you know, you asked what we were trying to do at the beginning of this. And really, what we were trying to do initially was to find something cool to do with some technology that Danny and Tom had [00:06:00] developed, together with Danny's master's thesis, an algorithm called consistent hashing.

[00:06:03] And, you know, we can get really far down into the weeds on this, but the long and the short of it is that Tom and Danny, both brilliant computer scientists from the algorithms group at the Lab for Computer Science at MIT, had some ideas about how to allow very highly distributed networks to be able to function with

[00:06:24] very imperfect information, and very different information from place to place, about what was in fact in the network. So the idea of hashing functions, how you decide where you put content and where you go and retrieve content, the idea of a lot of routing on the internet: all of these work because you don't need to know what the entire network looks like

[00:06:45] from one place. Yeah. And, you know, that was sort of a new topology in the late nineties when we were working on this, and Tom and Danny had some amazing technology ideas around this. And it was actually something very similar to, I think, [00:07:00] what edge computing is starting to look like now, and certainly what we're trying to do now at Ridge.
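
[Editor's note: to make the consistent hashing idea concrete, here is a minimal, illustrative sketch, not Akamai's actual implementation. Servers and content keys hash onto the same ring, so any node can decide where a piece of content lives without a global view of the network, and adding or removing a server only remaps a small slice of keys. The server names are made up.]

```python
# Minimal consistent-hashing sketch (illustrative only, not Akamai's code).
# Servers and content keys are hashed onto the same ring; a key is served by the
# first server clockwise from it, so adding or removing one server only remaps
# the keys in that server's arc, and no node needs a full view of the network.
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, servers, replicas=100):
        # Each server gets many "virtual nodes" on the ring to smooth the load.
        self._ring = sorted((_hash(f"{s}#{i}"), s) for s in servers for i in range(replicas))
        self._points = [p for p, _ in self._ring]

    def server_for(self, key: str) -> str:
        idx = bisect.bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["edge-tokyo", "edge-frankfurt", "edge-chicago"])
print(ring.server_for("/images/logo.jpg"))  # the same key always maps to the same node
```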

[00:07:07] What we figured out at Akamai was that even though there were very big network providers out there, AT&T, WorldCom, BT, they were always going to be way bigger than little Akamai was going to be. But if you were a content owner, if you were a content provider, you cared about getting to every single user in the world very effectively.

[00:07:29] And as enormous as AT&T was, they only got you close to a relatively small fraction of the users in the world. And if you really wanted to get close to all of those users in the world as a content owner, you should probably be thinking about putting your content on AT&T, but also on NTT, and also on China Telecom, and also on BT. It should be in lots and lots of different places,

[00:07:52] from a network topology standpoint as well as, probably, a geography standpoint. And that idea of distributed infrastructure, [00:08:00] and the enormous advantage that we could give to content providers through distributed infrastructure, was, I would say, one of the fundamental innovations that we brought to the table at Akamai. It was amazing to go to a customer at the very beginning, when we had,

[00:08:17] you know, single-digit millions of dollars of capital raised, and we were 12 people in the company, or whatever we were, we were tiny, and to go to a customer like CNN, or like the New York Times, or like Disney, and say with a straight face: we are bigger than WorldCom as far as you're concerned. And you get this look from people like, are you crazy?

[00:08:42] And we'd say no, we're bigger than them in the way that matters. We're sitting on five different networks around the world, and they are by definition on one. And I'm going to get you closer to a lot more users than you would ever be able to get to by working with just one provider. And that fundamental change [00:09:00] to how content owners think about content delivery,

[00:09:04] I would say, is sort of the biggest innovation, the fundamental innovation, that came out of Akamai, and what made it such an exciting company to be a part of in the early days. It's what made telling the story of what we were building so fun and so exciting, because you had these aha moments with a content owner.

[00:09:21] Matt: [00:09:21] Yeah. What a great reaction to a value proposition as a startup.

[00:09:23] Jonathan: [00:09:23] Yeah. Yeah. It was so fun, right, to go and talk to these big content owners and say: hey, people try to get to your content from Japan. Well, let me tell you, and again, I'm picking on AT&T here, but AT&T's network in Japan, it's not as good. You know whose is really good?

[00:09:39] NTT's. And I have a partnership with them. And so this idea of overlay infrastructure, of being able to build something on top of other providers, and of thinking about, you know, the early days... We used to use the term edge for where we were going to deliver content from. We would say to content providers: we are going to get your content closer to the edge of the [00:10:00] network.

[00:10:00] One of the things that, to me, is so interesting, and I hope, Matt, that you and I talk about this a little bit later on, is, definitionally: what is edge?

[00:10:10] Matt: [00:10:10] Yeah, let's dive right into that. Because when I was doing the research for the first State of the Edge report, I went looking to try to answer this question, and it is kind of an impossible question to answer because there are lots of different edges.

[00:10:20] So I think that's part of it, but some of the earliest references to edge computing that I could find in the literature were in some of the early writings that Danny and others on the founding team and technical team did at Akamai. I mean, Akamai certainly invented the content delivery network, which was the origin of the business.

[00:10:36] Now Akamai does quite a bit of other things, but it was a content delivery network, and it potentially also invented the term edge computing, although maybe not, but it's certainly one of the earliest references. In fact, your first product, as far as I could tell, was called EdgeSuite, back in 1998.

[00:10:50] Jonathan: [00:10:51] Yes, so a very early product at Akamai was called EdgeSuite. Our first product was actually called FreeFlow, and our next product was called FirstPoint, which was a load [00:11:00] balancing product, a mapping product. But the first capability that we had in the network beyond just sort of bit delivery,

[00:11:07] beyond just this very highly distributed, you know, disk-plus-network-interface-card capability that we had, where we could actually do some amount of page assembly and have, you know, HTML processing and dynamic webpages and things like that: the very first place that we implemented that was in a product that was called EdgeSuite.

[00:11:27] Subsequently, EdgeSuite became sort of the master, high-level product name that a lot of other things fell under. So yes, we absolutely used that term very early. Even before we had a product that had the word edge in it, we did talk to content providers about trying to get their content to the edge of the network.

[00:11:43] And in that particular circumstance, definitionally, what we meant by edge was close to end users, so that we can give you low latency and, in most circumstances more importantly, high throughput to that end [00:12:00] user. That was sort of the definition.

[00:12:03] Matt: [00:12:03] Yeah, I mean, back in 1998, if you tried to watch a high-definition video,

[00:12:07] you'd get buffering. In fact, even large JPEGs you'd have to wait for. Absolutely. So yeah, I definitely see the advantage of that. So a question is: if the primary goal of Akamai in the early days was just delivering stored content, you know, as you said, in the early days it was a network interface card and a disk and some clever software,

[00:12:29] why not just build bigger pipes? Why place the content out at the edge?

[00:12:34] Jonathan: [00:12:34] That's a great question. So, you know, the way that traffic works, and traffic routing works, and caching ends up working, it's sort of like saying: why don't we just fix freeway congestion by building bigger and bigger freeways?

[00:12:50] We've tried that; it turns out it hasn't worked so well, right? So it turns out that you in fact really need to be thinking about this architecture quite holistically. [00:13:00] And it's not just the core of the network that matters. It's also the connectivity at the edge, and it's the way that someone from their home or from their office is being aggregated before they get to

[00:13:11] a node, to then go upstream. And so the idea of caching was not a new idea when we came into being at Akamai. ISPs themselves and network operators themselves would use caches to try to reduce the amount of traffic that they needed to bring into their network. And the goal of reducing traffic was important because, first of all, if you were a sort of local ISP, you're probably buying that connectivity from somebody upstream.

[00:13:42] And so if I can buy less of it, I bring my costs down for running my business. That's good. And by definition, if I was going to put you that much closer to the content, I was going to give you better performance. And so, you know, there were certainly improvements to the infrastructure that happened because the core [00:14:00] was getting built out so aggressively in the late nineties and early two thousands. That absolutely helped.

[00:14:05] But at the end of the day, you know, when we've built bigger freeways, we've also needed bigger freeway exits and bigger rest stops and better traffic light management on the main streets that come off of those freeways. I've probably totally butchered the analogy at this point, but just fixing the core never solves the problem in these kinds of systems.
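
[Editor's note: as a rough back-of-envelope illustration of the caching point, with assumed numbers rather than figures from the episode, here is how much upstream transit an edge or ISP cache saves at a given hit ratio.]

```python
# Back-of-envelope cache offload estimate (assumed numbers, not from the episode).
requests_per_sec = 50_000      # requests arriving at a local edge/ISP cache
avg_object_kb = 200            # average cached object size
cache_hit_ratio = 0.90         # fraction of requests served locally

total_gbps = requests_per_sec * avg_object_kb * 8 / 1e6      # demand seen at the edge
upstream_gbps = total_gbps * (1 - cache_hit_ratio)           # what still crosses the core

print(f"demand at the edge:   {total_gbps:.1f} Gbps")
print(f"pulled from upstream: {upstream_gbps:.1f} Gbps")     # 10x less transit to buy
```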

[00:14:26] Matt: [00:14:26] Yeah. And for people who don't understand the scale of Akamai, reading from Wikipedia, which is probably five years out of date: Wikipedia claims that Akamai has over 275,000 points of presence of servers (well, probably more servers than points of presence) in 136 countries. So I imagine it's bigger.

[00:14:44] So it truly is a global operation. And so, you know, I can sort of imagine...

[00:14:51] Jonathan: [00:14:51] Probably true. You know, a quarter of a million-ish servers is probably about right. But when you think about why we couldn't just build the [00:15:00] core bigger, the number that's truly staggering is that on a daily basis, the aggregate traffic being pushed off of the Akamai network at peak is over a hundred terabits per second.

[00:15:12] Right? Let that sink in for a second, and then think about: okay, can I build a data center that's going to push a hundred terabits a second out the door? No, of course you can't.
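
[Editor's note: a quick bit of arithmetic on the numbers mentioned above, illustrative only, shows why that peak is manageable when spread across a quarter of a million servers but impossible to push out of a single building.]

```python
# Rough arithmetic on the figures mentioned above (illustrative only).
peak_tbps = 100          # aggregate peak traffic across the whole network, terabits/s
servers = 250_000        # roughly a quarter of a million servers

per_server_gbps = peak_tbps * 1_000 / servers
print(f"~{per_server_gbps:.1f} Gbps per server at peak")
# ~0.4 Gbps is trivial for one box; 100 Tbps of egress from one data center is not.
```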

[00:15:24] Matt: [00:15:24] That's quite something. So, okay, let's go back to the definition of edge, because I sense that maybe it's changed.

[00:15:30] So what is your definition of edge today? When someone asks you, like, Jonathan, where's the edge? How do you answer that question?

[00:15:36] Jonathan: [00:15:36] Yeah. So, you know, it's very application-dependent and very situation-dependent. I'll tell you just a quick little old story about edge. When we started using the term the edge of the network, and moving content to the edge of the network, and delivering content from the edge of the network,

[00:15:51] you know, we felt like we'd come up with a pretty cool idea. I don't know if we were the first or not; it certainly [00:16:00] felt like it when we did it. But, you know, I recall a year or so later walking down the street in New York City with my co-founder Danny. And AT&T had just put sort of the marketing term EDGE around, like, 2G or 3G or, you know, some G from quite a while ago.

[00:16:19] And they were calling that AT&T EDGE. That was the name of their high-speed network at the time. And so there was a bus stop in New York City with this poster, you know, a bus stop poster with a fancy, whatever it was, Motorola RAZR cell phone or whatever it was, probably even before that, maybe a StarTAC cell phone, and AT&T EDGE on it.

[00:16:39] And I remember walking past this bus stop and pointing to it and saying, oh my God, Danny, they're using our word. And it clearly meant nothing at all like what we meant [00:17:00] when they were using it in the context of this, you know, digital network that they were building out. And so the reason that I tell that little story is because I think that this idea of

[00:17:04] edge computing, and I don't want to disparage the word when you and I are speaking at a conference called Edge Computing World, but the word is super situational. It's very much dependent on the application, on the use case, on the geography, on the industry type that we're talking about. And the way that I think about edge:

[00:17:22] first of all, I try not to use the term all that much. Second of all, when people say, hey, how do you define edge computing? I say: look at the application that people are trying to deploy and understand what level of distribution it needs. And by distribution I mean, at a sort of technical, definitional level, how distributed does it need to be?

[00:17:45] How many different places does it need to reside in, and how do you define the places where it needs to live in order to do what it needs to do effectively? That, for that application, is the edge. So, that was [00:18:00] sort of a terrible way to explain it, maybe you can edit this out in post for the people who aren't live, but one of the other ways that I think about it is a little bit like going to the planetarium with kids and seeing the exhibition of the orders of magnitude, where they go ten to the first, second, third, fourth: here's an atom, here's a molecule,

[00:18:19] all the way until you get out to the solar system and the universe. Well, you know, there are applications out there that are going to need tens of milliseconds of latency for every user in order to function well. That's the type of application that isn't going to be well supported by the centralized infrastructure that most cloud computing applications use today.

[00:18:40] Matt: [00:18:40] Yeah. Or even the next order of magnitude: tens of microseconds for a virtualized

[00:18:44] Jonathan: [00:18:44] 5G network. Absolutely. Absolutely. So move from tens of milliseconds, right, to single milliseconds. Well, now you're going to need a lot more places. You are going to have a lot more nodes as your edge.

[00:18:58] If you want to get into single-digit [00:19:00] milliseconds, or you want to get below a millisecond, you're definitely talking about a lot of different geographies. If you're okay with hundreds of milliseconds, the centralized architectures that are out there today will probably do that for you for, you know, a lot of the world's population.

[00:19:16] And so to me, there's a little bit of this orders-of-magnitude, you know, definitional question that happens. And, you know, what you guys are building at Vapor is going to absolutely help in that kind of tens-of-milliseconds world and beyond, right? But you guys are going to go way past that into these microsecond-type applications.

[00:19:34] But even just being able to guarantee and assure people of tens of milliseconds of latency is valuable, because centralized infrastructure can't assure people of that.
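
[Editor's note: to make the orders-of-magnitude point concrete, here is a rough rule-of-thumb calculation with assumed numbers. Light in fiber covers roughly 200 km per millisecond one way, so a round-trip latency budget caps how far away the nearest compute can sit, and therefore how many sites an application needs.]

```python
# Rule-of-thumb: light in fiber travels ~200 km per millisecond one way (~5 us/km).
# A round-trip latency budget therefore caps how far away the nearest site can be.
# Assumed numbers for illustration; real budgets also include processing and queuing time.
KM_PER_MS_ONE_WAY = 200

def max_radius_km(rtt_budget_ms: float) -> float:
    return (rtt_budget_ms / 2) * KM_PER_MS_ONE_WAY

for budget_ms in (100, 10, 1):   # hundreds -> tens -> single milliseconds
    print(f"{budget_ms:>4} ms round trip -> nearest site within ~{max_radius_km(budget_ms):,.0f} km")
# At 100 ms a continent away is fine; at 1 ms you need a site within ~100 km of the user.
```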

[00:19:44] Matt: [00:19:44] Yeah. You know, today we seem to be in this second wave, or golden age, of edge computing. Certainly from a Google Trends perspective it's hot, and there's a whole show about it. Back then, you know, there was a company, but there wasn't a whole show about it.

[00:19:56] And, you know, I spend a lot of time thinking about what the causes of this are, [00:20:00] wearing the State of the Edge hat I often wear on this podcast. And one of the answers that feels most compelling to me is this: this is the first time in human history where we're moving from an internet that has been primarily humans talking to machines.

[00:20:18] You talk about what problem you were solving at Akamai: it was humans requesting content, watching a video or looking at a webpage. We're moving to a world where primarily machines are going to be talking to machines, and machines are on 24/7. They generate copious amounts of data.

[00:20:32] And when they talk to each other, they want to do it in, you know, nanoseconds and microseconds and milliseconds. But that's just one of the reasons. I want, from your perspective: what's going on now that's causing this new energy around edge computing?

[00:20:49] Jonathan: [00:20:49] So, you know, I think that there are a few different things that are happening right now that are driving this need, or this interest, in edge computing.

[00:20:58] One is that we are [00:21:00] seeing a bunch of low-latency applications, applications that care about latency, being developed as cloud native applications. So applications that are being developed as brand new applications know that they're not going to be running on owned-and-operated infrastructure, but rather they will want to run on, you know, on a cloud.

[00:21:19] And at the same time, those applications are, for performance reasons, very sensitive to latency, and they know that they don't get the level of customer satisfaction that they strive for if they can't achieve that. We've certainly seen, you know, all of us and our families and our children using more and more of these applications through this pandemic.

[00:21:39] Right? We are in live interactions with people on Zoom and WebEx and others, you know, way more than we were previously. Kids run video games a lot more as part of their social engagement, and, you know, anybody with kids who play any of these kinds of games knows the sound of "I'm lagging!" being screamed from the kids' bedroom, right?

[00:21:59] It [00:22:00] is something that we all have experience with. And so, as we see more of these applications that care about latency get developed as cloud native applications, they are, by definition, also going to want something that is edge native: as you said, the machine-to-machine piece, and the desire of the people who are building those systems, be they IoT systems or any systems that are looking for any kind of autonomous control, where latency starts to matter a lot.

[00:22:29] Those things are starting to creep in, to show up in a very active way, in today's world. One of the things that I think has also been really interesting in the last couple of years, really just since we started Ridge, is that we've seen a huge uptick in the quantity of data sovereignty and data residency regulations that are out there in the world.

[00:22:51] There are more and more countries that are starting to really care about where their populations' [00:23:00] social insurance numbers and social security numbers end up moving to, and where the applications that those people are using are in fact running.

[00:23:07] And so we're seeing, I think, a big uptick in concerns around data privacy and data sovereignty. And that, though it's not really a performance driver for edge applications, is still a huge driver for edge. And that's why I said, sort of at the beginning, that it's a definitional question, right?

[00:23:27] If you just think that edge is out there as a concept because it makes stuff fast, well, okay, that's true for some things. There are certain things that really care about fast. But sometimes you just need it to be in a particular place because that's what the law says, it's got to be here, or because that's what your industry best practices are:

[00:23:47] it can't move out of here. And so all of those things become edge drivers. I think you're on mute right now, Matt.

[00:24:01] [00:24:00] Matt: [00:24:01] Yeah, because I do have some video-game-playing kids that are complaining about the lag in the background from time to time. So what I was going to say is, you know, there's been a bit of a backlash on latency over the last six months. I mean, I would say that I've legitimately been in edge since, like, 2017, and in 2017 we all talked about latency, because clearly

[00:24:22] that is one of the huge benefits of edge computing. But the use cases that demand low latency are not as fast to emerge as some of these other use cases. I mean, they're there and they're definitely coming. So data sovereignty being one of them; one of the original value props for Akamai is another one, just reducing congestion.

[00:24:42] I mean, if I can keep my traffic local, if I can offload my traffic in a local zone and have it processed, I don't have to pay a transit provider. Or, if I'm a private enterprise, do I really want to pay for the egress from some [00:25:00] cloud provider that I sent my data to for free, you know?

[00:25:02] So I think you're right. And then there are the data sovereignty issues, right? And data sovereignty issues are even becoming more complex. You mentioned a country level, which I think, if you look at what's happening in Europe, is certainly the case, but even in the United States you're sometimes seeing it on a state level.

[00:25:16] You're certainly seeing discussions about it, and even on a city level. And you're talking about, like, hospital data: the hospital's like, well, it doesn't have to be on-prem, but my IT team has to be able to walk to it, right? But we all want to use cloud principles. I mean, the way we solved that in the past was we did, you know, embedded systems or on-prem systems, but now, as you mentioned, and I think that's a really important point,

[00:25:38] a lot of the new applications are being built using cloud native principles, and so we have to have the cloud. So let's talk a little bit about cloud. I'm going to transition to Ridge, which I know you want to talk about, but let's go back in time a little bit. Let's talk about, I guess it was EdgeSuite, where you were first introducing programmatic elements into edge.

[00:25:58] So tell me about that, because that seems to me [00:26:00] like a precursor to this edge cloud that we're talking about today.

[00:26:05] Jonathan: [00:26:05] Well, so, yeah. At Akamai, the very first thing that we knew how to handle for our customers was embedded objects in their webpages.

[00:26:15] We knew how to handle the JPEGs and those sorts of things in webpages. It sounds ridiculous to say that we started a business where the only thing we were going to deliver was the JPEGs off of websites. When I hear myself say it today, I'm just like, oh man, was that really it? But at the time, if you really looked at what was on a webpage,

[00:26:35] you know, 3% of it was the HTML container and 97% of it was the stuff that was then going to populate the page. And this complaint in the late nineties about the World Wide Wait was not about the HTML loading, right? It was about all the images sort of painting themselves in slowly but surely.

[00:26:54] And so, you know, we're talking about dial-up modems and all of that kind of stuff, right? In terms of [00:27:00] throughput rates, we're talking about tens of kilobits per second of traffic that we were trying to get this stuff to function over. So, you know, over time at Akamai we

understood that our customers were going to be more loyal to us, and we were going to have, you know, lower churn and better retention, if we were able to do more and more stuff for them over time. And so we obviously wanted to be able to provide more than just, hey, we'll handle your JPEGs. And so we built technologies that handled streaming media, and file delivery for, you know, for software downloads,

[00:27:34] and then eventually also being able to handle the actual HTML and the page containers. And that meant really knowing how to operate a web server in a highly distributed fashion at scale. So, you know, at Akamai we did in fact start to do some very early compute. The evolution of that has certainly been very strong, and CDNs today

[00:27:55] do have offerings out there that will allow you to do some level of compute on [00:28:00] those platforms.

[00:28:01] Matt: [00:28:01] Serverless-type compute, yeah.

[00:28:03] Jonathan: [00:28:03] And so, you know, Cloudflare, I think, are quite prominently known in the industry for how much they've pioneered that path with Cloudflare Workers, but Akamai's got one and Limelight's got one, and, you know, all of the CDNs have the capability of running that type of workload.

[00:28:22] The difference is that the CDNs, in what I see them doing, are trying to run relatively lightweight and very often stateless (although that's changing a little bit) applications at the edge, right? And when you look at a CDN edge, as you said, Akamai has a quarter of a million servers out there.

[00:28:43] That edge is very distributed; it's, as you said, 135 countries, right? It's in a lot of places. But the functionality that CDNs are offering to application owners is for very lightweight applications. The type of cloud capabilities that we are working on at Ridge, and that we [00:29:00] think are going to be really critical for this next generation of cloud native applications, are for full-fledged applications.

[00:29:07] You need to be able to run, you know, WordPress, Mongo, whatever it is that you want to run, and then of course your own application. So whatever you have built and want to deploy: the way that we look at it is that if you can containerize that application, if you can run it in Kubernetes, you should be able to run it in lots and lots of places.

[00:29:29] And that is the infrastructure that we at Ridge are building for the industry.

[00:29:35] Matt: [00:29:35] Yeah, so let's talk about that. So you founded Ridge in 2018, and, you know, I hope you got some good founder stock at Akamai, and I imagine if you held onto it for a little bit, you probably did not need to start another company.

[00:29:48] So clearly something captured your imagination. What was going on in 2018 where you said, this is something that I can build, and I can beat Akamai, I can beat [00:30:00] Cloudflare, I can beat Amazon, or at least potentially? Tell us about that. I mean, it's pretty cool.

[00:30:06] Jonathan: [00:30:06] Well, so, you know, when you say it that way... When we started Ridge over two years ago, as you said, in 2018, as I said to you at the beginning, I've always really enjoyed being in the infrastructure industry.

[00:30:23] It has always felt, you know, like it's not the sexiest place that you can be in terms of industry accolades, but you always feel like you're building something that's mission critical, something that, you know, hey, if you didn't have this stuff out there, it just wouldn't work.

[00:30:43] Matt: [00:30:43] Oh, that's really true. I mean, this is my first venture into infrastructure and I definitely feel that way. So I definitely understand that.

[00:30:48] Jonathan: [00:30:48] Yeah. And, you know, try to explain to your kids what it is that you do. But outside of that, it really is

a great place to be. And so after I left Akamai, I spent a good amount of time as [00:31:00] an investor at a venture capital firm, but I really missed being part of a team. I missed the day-to-day building of a business. And I really do, as I said, enjoy the infrastructure space. And one of the things that my co-founders and I were observing originally, in 2018, is that

[00:31:17] there just didn't seem to be a lot of really good thinking about how to create very highly distributed infrastructure for compute. And I have the background that I have, coming out of having co-founded Akamai and having seen our original customers at Akamai build bigger and bigger, you know, cages inside of data centers.

[00:31:40] You know, when we started Akamai in '97, if you were a content provider that all of a sudden became popular, the way that you solved that problem was that you picked up the phone and you called your Sun Microsystems sales rep, and you bought more SPARC servers and you put them in a rack, and you bought a bigger load balancer and brought more connectivity in the door.

[00:31:58] That's how you solved the problem. [00:32:00] In the cloud world, well, nobody's buying SPARC servers and nobody's buying their own load balancers; that's not how cloud native applications are being deployed. But what we observed at Ridge is that it really did feel quite similar. You're an application owner and you've picked a place, a cloud, that you're going to run it on.

[00:32:21] And as it gets more popular, you just use more and more of those resources and your bills get bigger and bigger. And eventually, you know, it gets huge and you start thinking about, maybe I should replicate it somewhere else, maybe there is a way to divide it up. But really, this is fundamentally centralized infrastructure, right?

[00:32:37] Matt: [00:32:37] I mean, you're on Amazon in the United States, you're making two choices: US West or US East.

[00:32:41] Jonathan: [00:32:41] That's right. That's right. Absolutely. And, you know, if you did decide that you were going to be multi-cloud and be on AWS and, you know, GCP, their data centers are right across the street from each other.

[00:32:53] You know, everybody's in the same...

[00:32:55] Matt: [00:32:55] And I manage them by hand on a spreadsheet, right?

[00:32:56] Jonathan: [00:32:56] Yes. And everybody's in the same geography [00:33:00] as well. They should be; those are the places that have the best connectivity, that have the best, you know, peering. Like, it makes sense: if you're going to pick a place, that's what it should be.

[00:33:10] But the argument that we're making at Ridge, and that you guys are making at Vapor, is that the applications that are going to really change the world going forward are going to want more than just that very finite number of places. One of the things that happened at Akamai is that, because this distributed infrastructure existed for content, because the CDN was created,

[00:33:35] and because you could get scalability, reliability, and performance in ways that you never were going to be able to out of centralized infrastructure: yes, our original customers, CNN, the New York Times, their websites got a little bit faster. Yes, they were able to put richer imagery and do more complex stuff on their websites.

[00:33:54] But the other thing that happened is that Netflix happened, [00:34:00] and, you know, Hulu happened, and online gaming happened. And again, even if some of those companies were never Akamai customers, the reason that those capabilities exist in the marketplace, the reason that we have those types of services, is because the CDN was created.

[00:34:21] And now all of a sudden you completely changed what entrepreneurs and what content owners believed to be the art of the possible, right? Then you can do totally different stuff.

[00:34:36] Matt: [00:34:36] Yeah, we haven't actually said these words, and I'm going to say them and you tell me if I'm wrong, but it sounds like Ridge is building a distributed cloud, meaning

[00:34:45] instead of US West and US East, I can provision cloud resources in Chicago West and Chicago East, and, you know, Moscow West and Moscow East, or whatever. Am I getting the general idea?

[00:34:57] Jonathan: [00:34:57] Yeah. Well, so what we're doing at Ridge is we're [00:35:00] building a very highly distributed cloud.

[00:35:02] We believe that, of the cloud native applications that are being built today, there are many of them out there that are going to care a lot about geography and care a lot about being highly distributed, either for latency reasons, as we talked about earlier, or for the data sovereignty and geographic reasons that we talked about earlier as well.

[00:35:22] So we are big believers that there's a whole set of applications that are already being built, and that we're going to see more and more of them show up, that really do care about that. The cloud that we're building is quite different from anything else that's out there, because we are doing this by partnering with existing data center operators all over the world. The best space, power, and connectivity in every geography in the world is owned and operated by a data center company in that place. The best data centers in Tokyo are owned by

[00:35:56] Japanese data center companies; the best data centers in [00:36:00] Frankfurt are owned by German data center companies. Those data center companies, or ISPs, or carriers, those companies have over the years developed a lot of compute offerings for their customers. Sometimes they're VMware-based, sometimes they're OpenStack-based, sometimes they're, you know, a bare metal cloud, right?

[00:36:23] There are a lot of different ways that you can consume compute from those data center companies in these different geographies, all over the world. However, we haven't found any, well, not none, but very few, of those data center operators, despite the amazing space, power, and connectivity that they have, despite the incredible compute and virtualization stacks that they have available to their customers,

[00:36:48] who are today capable of offering the managed services that are the hallmark of what a modern cloud native application wants. [00:37:00] They don't have managed databases. They don't have managed storage. They don't have managed Kubernetes. They don't have managed containers. All of these services that have become what today we are calling platform as a service, right,

[00:37:13] the PaaS offerings that are out there, as opposed to the lower-level infrastructure-as-a-service offerings, simply don't exist in that data center and ISP and carrier world. And so what we're doing at Ridge is we're partnering with these operators all over the world, taking the best IaaS available in those markets and turning it into PaaS.

[00:37:38] Now, the thing that we get to do beyond doing it in an individual market is, by partnering with data centers in lots and lots of different markets, we get to federate that and create a truly global cloud, such that today, in the earliest days of our commercial operations, we have over 80 different places where we can deploy managed container [00:38:00] or managed Kubernetes based workloads.

[00:38:02] In the same way as at Akamai, where we didn't own the underlying data centers and network, at Ridge we don't own the underlying data centers, network, or, for that matter, server capacity. But these data center operators out there, as I said: space, power, connectivity, compute, they're great at. They just don't have the software layer that we're able to provide for it.

[00:38:24] Matt: [00:38:24] So basically your model is to find underutilized pockets of compute that sit in these edge networks and, as you say, federate them, bring them together through a universal software layer that developers can understand and program to, so that I can be in Seattle and build an application that runs all over the world.

Is that essentially the idea?

[00:38:46] Jonathan: [00:38:46] Absolutely. And the only thing that I might correct a little bit is that I'm not necessarily even looking for underutilized capacity; it's capacity of any sort that's there. The data centers we are talking to are saying, wow, we want to be in this [00:39:00] business, and we'll put in dedicated capacity.

[00:39:02] Matt: [00:39:02] I see, right. Yeah, there are a lot of people who are incented to sort of forward-deploy. I mean, even like HPE with GreenLake, creating innovative business models where it's pay-as-you-grow, and that feeds directly into this. So back at Akamai, did you have to buy all your servers, essentially, or lease them yourselves?

[00:39:18] Jonathan: [00:39:18] We bought all of our servers at Akamai. We sort of, yeah, we had our own reference architecture for them. We bought them ourselves. We owned them. We shipped them out to the data centers, where, you know, the local data center partner of ours would help us deploy them, after the first eight that Danny and I drove out to Waltham, Massachusetts, in the trunk of my Mazda, not realizing that we needed to bring our own tools.

[00:39:40] So we screwed them into a rack in Waltham using the screwdriver off of the spare tire kit in my car. We only did a few of those ourselves before we realized that was not something that we were going to be that good at, and we asked our data center partners to help us, you know, rack them and stack them.

[00:39:56] But yes, at Akamai, the actual hardware infrastructure is owned [00:40:00] by Akamai. And at Ridge, as I said, these data center partners who we enjoy working with, they're outstanding. I mean, you know who these guys are. They're great.

[00:40:11] Matt: [00:40:11] Great. Yeah. And they have good balance sheets and good investors, and yeah, it's an interesting world.

[00:40:17] And I think, you know, one of the things that, well, I don't think we're ready to write the history books yet, but I'm certainly sensing that one of the things that's catalyzing this transformation is the innovation in business models. You know, I mean, in my business, for instance, we're in the business of shared infrastructure, and the economics, I mean, to your point, right?

[00:40:37] The economics of shared infrastructure are so compelling. Like, if there is a data center that somebody has already paid for and figured out how to amortize, and all you have to pay for is a tiny piece of that, and you're sharing the, you know, the universal expenses with 20 other tenants or 150 other tenants,

[00:40:56] the economics there are just [00:41:00] so compelling. And I just feel like those kinds of innovations are more readily available than they've ever

[00:41:03] Jonathan: [00:41:03] been. Yeah, I think that's certainly true. You know, I think one of the other things that we've found is that, with the data center operators and companies who we are partnering with and working with, often when we describe what we've built, and we describe this software layer that will take their IaaS capability and create managed services, a very modern kind of stack, on top of what they have,

[00:41:28] you know, with a lot of them, their eyes kind of light up and they say, oh yeah, we've got a bunch of customers who've been asking us. You know, they're big customers with a big data lake that they have stored here, or a bunch of, you know, legacy applications that they're running in cages in my data center, but they've also built a bunch of new applications, and I'm their preferred service provider.

[00:41:50] And they're asking me if they can buy a managed container service from me, or can they buy a managed Kubernetes service from me. And my answer to them is no, you know, I don't have that yet. And [00:42:00] what we've been able to do is to allow those data center operators to go back to that customer and say, hold on a second.

[00:42:05] Actually, I do, I've got one. Yeah, right next to the cage that you already know and love, with the same account manager, with the same excellent pricing, with the same stuff that you were so committed to using, at a large-scale data center run by, as we've said, truly exceptional operators.

[00:42:25] Yes, I now have those more modern services on top of this stack. And, you know, the way that we approach it is, if you can run an application on, without picking on any of the hyperscalers more than the others, but on GKE, for example, if you have a Kubernetes-based application, it can run on GKE,

[00:42:42] you should be able to pick that up and, without changing a line of code, run it at any data center on the Ridge map, any of the places that we have. Pick that application up from GKE, drop it down, and run it in any one of these geographies that we have.
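
[Editor's note: a minimal sketch of the "same manifest, many clusters" idea described here. The cluster context names are hypothetical and this is not Ridge's actual API; it only shows how an unchanged, containerized workload can be applied to several Kubernetes clusters in different locations.]

```python
# Illustrative only: apply one unchanged Deployment to several clusters.
# The kube contexts below are made-up names, not real GKE or Ridge endpoints.
from kubernetes import config, utils

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "demo-app"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "demo-app"}},
        "template": {
            "metadata": {"labels": {"app": "demo-app"}},
            "spec": {"containers": [{"name": "demo-app", "image": "nginx:1.25"}]},
        },
    },
}

# Same containerized workload, no code changes: a hyperscaler region plus
# a couple of hypothetical edge locations.
for context in ("gke_my-project_us-west1_prod", "edge-tokyo", "edge-frankfurt"):
    api_client = config.new_client_from_config(context=context)
    utils.create_from_dict(api_client, deployment)
```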

[00:42:59] Matt: [00:42:59] Yeah. So that leads [00:43:00] to a really interesting question that I have.

[00:43:02] So I understand that one of the powers of a platform like Kubernetes is to be able, with the containers and everything, to basically have this be write once, run many places. But that's just a mini version of the problem we have today, which is how do I run it in US West and US East, and on GKE and Azure and Amazon.

[00:43:22] But there's a more interesting question, which is: there's a difference in my mind between a cloud native application and a highly distributed cloud native application. And as I'm sure you know, true distributed computing is actually really hard. And, you know, how do you coordinate?

[00:43:42] I mean, Google uses atomic clocks to coordinate database writes across the world. How is Ridge helping to solve those higher-order problems, if

[00:43:52] Jonathan: [00:43:52] at all? Yeah. So it's a great question. And as you said, truly distributed compute is incredibly difficult. [00:44:00] And the thing that we're focused on is understanding, from the application owner's standpoint, what functions can in fact become distributed applications and which ones can't.

[00:44:15] And we are not going to claim that every single application that is running on AWS today should move over to a distributed computational model. And we also, you know, there are going to be certain distributed applications that will work well but will require

[00:44:35] dedicated connectivity, for example, between the facilities, right? For a database reason or for, you know, a synchronization reason. And those are applications that we kind of look at and we say, okay, Ridge might not be the right platform for that quite yet. Ridge is right, in today's world, for applications that are going to be able to run as,

[00:44:57] again, a managed Kubernetes, managed container, or [00:45:00] managed service application in, you know, a bunch of discrete geographies, with the ability to tie back to whatever systems it needs.

[00:45:10] Matt: [00:45:10] And replication of the core workload. And then maybe there's some, you know, some coordination with core workloads and all that, but it's not necessarily true distributed computing.

[00:45:19] Yeah. And I think you can get pretty far with that, certainly. And it sounds like you're watching what your customers are asking for. And some of the innovations are happening, there are some real innovations happening, you know, with the service mesh people and stuff like that, to create applications that more autonomously run across multiple geographies.

[00:45:38] But it's still very much a computer science problem that hasn't completely been solved.

[00:45:42] Jonathan: [00:45:42] Yeah. No, you're absolutely right. You know, there are a lot of interconnect problems between facilities, for example, that 10 or 15 years ago we would've said the only way to solve is with dedicated fiber.

[00:45:59] You know, [00:46:00] and today we're solving them with some kind of virtualized network layer that connects between places. And look, you guys see this at Vapor, right? You guys have a bunch of applications that people care about that, you know, need the dedicated connectivity that you're bringing to

[00:46:17] certain facilities, and for others, you know, nailing up virtual circuits does the trick.

[00:46:23] Matt: [00:46:23] Yeah. So you mentioned that today a customer of Ridge can deploy in 80 locations. Is that right? Did I hear that correctly? Yeah, eighty locations. What are the most interesting use cases that your early customers have brought to your platform?

[00:46:36] Jonathan: [00:46:36] So the places where we've had some success early on, I'll give you a couple of examples. One is a company that provides virtual desktop and remote browsing services. Most of their customers are in the financial services industry. And, you know, these are companies who have a whole approach to how you make sure that... it's actually super important during COVID. [00:47:00]

[00:47:00] Totally. So what do you do when people are on the road, but also when you want people around the office? How do you make sure that everybody who has an email address at morganstanley.com, right, has a web browser that can open up, and that can mean lots of different people getting to that machine?

[00:47:15] And so there's a whole set of technologies around both the virtual desktop side of that and, more specifically, the remote browsing side of that, that are very interested in high levels of geographic distribution. That's been one place we've seen a lot of interest in this kind of edge compute capability. Remote monitoring, and

[00:47:32] quality monitoring. So, you know, ThousandEyes was acquired recently, right? And, you know, we know some people who sort of say, hey, listen, if a thousand eyes is good, then ten thousand eyes has got to be better, right? And so the idea that you can actually have very highly distributed virtualized infrastructure in lots of different places is another thing that we've seen happen.

[00:47:54] We've seen a lot of ad tech, and the inference [00:48:00] part of AI and machine learning technologies, want to be distributed to be much closer to end users. So we see a lot of, like I said, ad tech, especially the sort of computationally intensive content creation and ad selection and user influencing.

[00:48:17] A lot of those sorts of things where these companies' customers have said to them: listen, I love the fact that you're going to do some really cool stuff and make sure that the perfect thing ends up in front of that user. But if you slow my site down, if you slow my content down, if you slow my overall interaction with them down, I don't care that you've optimized what you're putting in front of them; you've ruined the experience for them.

[00:48:40] And so we're seeing a lot of interest from those types of, you know, applications and use cases. And then finally, you know, I think whenever you talk about latency, people immediately go to gaming as one of the interesting spots. And we certainly have seen, you know, some next-generation games where the architecture teams [00:49:00] are already starting to think about whether they should embrace a much more geographically distributed model than they are currently embracing.

[00:49:08] We haven't seen any of those, you know, turn into live customer traffic yet, but there's a fair amount of conversation around the design cycle for the next generation of games.

[00:49:19] Matt: [00:49:19] Yeah. So at the top of the interview I kind of teased you about, you know, competing against Amazon and Google and all these large cloud providers, but you actually just recently wrote

[00:49:27] an article saying that the future will be hyperscaler and distributed cloud, not hyperscaler or distributed cloud. And so I imagine you have a plan to support your customers in that, and maybe even partner with the large hyperscalers. What can you tell us about your thinking along those lines?

[00:49:42] Jonathan: [00:49:42] Oh, well, so, you know, the way that I think about that, the way we think about this at Ridge, is I can't imagine that if we sit down and have a conversation five years from now about the cloud, that, you know, three companies have [00:50:00] a hundred percent market share of that industry. We've just never seen an infrastructure

[00:50:07] you know, capability or an infrastructure company become a full-on monopoly in, you know, in the infrastructure world. Even AOL, at their biggest and baddest and, you know, most powerful, they didn't corner the

[00:50:22] Matt: [00:50:22] market on CD mailers though.

[00:50:24] Jonathan: [00:50:24] Yeah, they did, on college dorm coasters. Between theirs and, you know, the EarthLink ones, those were the collector's items.

[00:50:33] But, you know, even at their peak, AOL, we observed at Akamai that they represented something like 16% of internet traffic consumption in the United States. And so, you know, the idea that we're going to have an infrastructure provider that will be able to do everything that a company needs them to do, for them, all over the world,

[00:50:57] and in every single location, just doesn't make a [00:51:00] lot of sense to me. We see things like Kubernetes as, you know, as much as it's a standard and a technology, we sort of view it as a protocol. This is how application owners are making sure that they will be able to run their application in lots of different places.

[00:51:18] I believe that our customers are going to be using us plus some number of other clouds. I don't think that people are going to pick up and leave, you know, Azure to run on Ridge. I think that they're going to find geographies all over the world that matter to them, latency-sensitive applications where they're going to want a very high level of geographic distribution.

[00:51:41] And they're going to say, great, I'm going to figure out how to have a multi-cloud environment where I'm using, you know, a hyperscaler or two. We're already seeing this today, by the way, right? In China. The massive kind of growth that we've seen at Alibaba Cloud and at Tencent has come from companies saying, [00:52:00] okay, I get it.

[00:52:01] The hyperscalers aren't great in those markets. I'm going to run on a hyperscaler in a lot of places, and then I'm also going to use Alibaba Cloud to make sure I do a great job in China. Okay. So we already have this paradigm of multiple clouds for different geographies.
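To make the "Kubernetes as a protocol" idea concrete, here is a minimal sketch using the official Kubernetes Python client. The kubeconfig context names (hyperscaler-us, regional-cn) and the container image are illustrative assumptions, not Ridge's actual setup; the point is simply that one Deployment definition can be applied, unchanged, to clusters running in different clouds and geographies.

```python
# Minimal sketch: apply the same Kubernetes Deployment to clusters in
# different clouds/geographies. Context names and the image are illustrative.
from kubernetes import client, config

CONTEXTS = ["hyperscaler-us", "regional-cn"]  # hypothetical kubeconfig contexts


def make_deployment() -> client.V1Deployment:
    """Build one Deployment spec that is identical for every target cluster."""
    container = client.V1Container(name="web", image="nginx:1.25")
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=template,
    )
    return client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=spec,
    )


for ctx in CONTEXTS:
    # Each context can point at a cluster in a different cloud or geography.
    api_client = config.new_client_from_config(context=ctx)
    apps = client.AppsV1Api(api_client)
    apps.create_namespaced_deployment(namespace="default", body=make_deployment())
    print(f"applied deployment to {ctx}")
```

The application definition stays constant and only the target changes, which is the same pattern that tools like Terraform or Rancher automate at a higher level.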

[00:52:17] Matt: [00:52:17] Yeah. And it just occurred to me that, you know, I mentioned the power of shared infrastructure and shared costs, and open source is really just shared infrastructure in a sense.

[00:52:27] And so it is one of the catalysts, right? The fact that Kubernetes existed, and a lot of other companies, you know, IBM, Red Hat, Google, are all contributing to this code base, and that's causing people to adopt it. But also you've got free, I'll put that in air quotes, free software that you can implement as a protocol that other people understand, which is really empowering.

[00:52:52] That's really neat. So let's look into the future. Okay. Where do you see the tipping point? I mean, at what point are we [00:53:00] just talking about edge as if it's part of the cloud? Like, cloud native developers just incorporate edge, or edge native developers are, you know, using the entirety of the cloud.

[00:53:12] Like, at what point do the edge and the cloud just become this one big thing in people's minds? Like, for the next generation of developers?

[00:53:19] Jonathan: [00:53:19] Yeah, it's a great question. Well, I think that what happens is that, you know, what today we would call the edge becomes something that is, you know, kind of subsumed in the core, and then there's your next edge kind of thing.

[00:53:37] I think that we're going to end up in a world, so, again, going back to my orders-of-magnitude kind of analogy at the beginning, right? If you want hundreds of milliseconds of latency, the core kind of is the edge already today, right? For, again, a very large percentage of the population. Sure. If you want tens of milliseconds of latency, we're building this specialized thing that people are calling edge in [00:54:00] order to handle that. But in the not terribly distant future, you know, going back to my history at Akamai, right?

[00:54:07] Like, 1998 broadband is, you know, what my cell phone gets when it's about to die on its last, you know, bar of battery and its last bar of connectivity. So, as we move over time, what we call something, these definitional terms, really shift as well. And so I think that if we look out a few years from now, these tens-of-milliseconds kinds of applications aren't going to be something that we think about as specialized edge applications anymore.

[00:54:39] Right. That's it. It's just going to be part of the cloud.
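As a rough back-of-the-envelope illustration of why those latency tiers force different architectures, here is a small sketch that counts only light propagation in fiber, roughly 200 km per millisecond one way; real round trips also lose budget to routing, queuing, and processing, so these distances are optimistic upper bounds.

```python
# Upper bound on how far away a server can sit for a given round-trip
# latency budget, counting only propagation delay in optical fiber.
FIBER_KM_PER_MS = 200  # light in fiber travels at roughly two-thirds of c


def max_server_distance_km(rtt_budget_ms: float) -> float:
    one_way_ms = rtt_budget_ms / 2
    return one_way_ms * FIBER_KM_PER_MS


for budget_ms in (100, 10, 1):
    print(f"{budget_ms:>3} ms round trip -> server within ~{max_server_distance_km(budget_ms):,.0f} km")

# 100 ms -> ~10,000 km: a handful of core cloud regions can cover most users.
#  10 ms -> ~1,000 km: you need many metro-level "edge" locations.
#   1 ms -> ~100 km: infrastructure has to sit very close to the user.
```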

[00:54:42] Matt: [00:54:42] Yeah. That's really interesting. I mean, we have customers today at my company that are demanding 75-microsecond latencies. Right. But you're right, that is a really specific, bespoke application for now, but eventually it's just going to be subsumed into the cloud.

[00:54:56] Yeah.

[00:54:56] Jonathan: [00:54:56] We're a ways away from 75 microseconds being something everybody [00:55:00] gets all the time. But we're not that far away from tens of milliseconds, in that way of looking at this. Right. And we'll get to tens of milliseconds being something that people are accustomed to, that everybody gets. Right? Like, right now, who at their house doesn't have 50 megabits per second of connectivity?

[00:55:19] Oh, that's just connectivity, right? Yeah, that's right. That's right. You have a gigabit per second,

[00:55:26] Matt: [00:55:26] and a 9,600 baud

[00:55:28] Jonathan: [00:55:28] modem was a big deal. Exactly. Five years from now, when everybody's got a gigabit per second to their, you know, to their house,

[00:55:38] Matt: [00:55:38] their, their

factory, their rain sensor.

[00:55:43] Jonathan: [00:55:43] Totally. So that, to me, is how, you know, how I think about it. So I think that if we talk about five years from now, which is your question, I would imagine that tens of milliseconds to the vast majority of users in the world is sort of de rigueur. That's kind of, if you're a cloud, you know, if [00:56:00] you're an application owner, you are going to know how to get that. Again,

[00:56:03] you might not get it from one vendor. I don't think you're going to get that just from, you know, AWS. I think you're going to get that from what, five years from now, will be what we call cloud architecture. And I think cloud architecture is going to be what today we're calling edge cloud architecture. And then five years from now, we'll sit down and we'll talk about how hard it is to get to single-digit milliseconds of latency, and how that's, you know, that's going to be 2025's edge cloud.

[00:56:31] Matt: [00:56:31] Yeah. So if you're a developer today, and you're kicking yourself for not becoming a mobile developer ten years ago, or for not becoming a cloud native developer five years ago, you should consider becoming an edge native developer, because Jonathan says it's here in the next five years, and that's about the right timeframe.

[00:56:48] So what do you see as the biggest challenges? I mean, if you could look out into, you know, the near future, you know, the next 10 years, and identify, you know, the dominoes that have to topple to, like, make [00:57:00] this a mass phenomenon, which is the one domino that you would push?

[00:57:05] Jonathan: [00:57:05] Oh boy. Well, it's a big domino, and, yeah, it's even more than a domino.

[00:57:11] So I'm going to cheat a little bit in this answer, but, you know, for this to work, for these ideas to work, this is an ecosystem play, right? One of the things that makes the hyperscale clouds so powerful is the ecosystems that they have developed around their platforms. And there's a lot of convenience that comes from picking a single hyperscale platform that you're going to use as your infrastructure,

[00:57:36] and having a whole bunch of people who are certified in, and, you know, understand that platform do work for you all the time. We are really big believers in things like, you know, Terraform, and things like Rancher, and, you know, the third-party kind of orchestration and CI/CD, all of those capabilities, [00:58:00] which, if you're going to be a multi-cloud application,

[00:58:05] you know, you need to pull together as an ecosystem play as opposed to a single-vendor play. That, to me, is the thing that is going to require the most, you know, kind of advancement for this to become more mainstream and more accessible

[00:58:23] over time. And, you know, you guys at Vapor have, you know, really led in this way, right? I mean, Cole Crawford, your founder and CEO, who I know pretty well, talks about this a lot. Like, none of us are going to solve this on our own. This is going to be an ecosystem, you know, play.

[00:58:43] And we're going to need to have, when a customer walks in the door and says, okay, I love the idea that you can get me a managed service offering in a bunch of geographies that I really care about, and where I don't have any infrastructure today, you're really solving a very real problem for me, but, hey, how am I going to, you know, [00:59:00] make this mesh together with the stuff I'm already running on GKE?

[00:59:04] How am I going to make that work? To have a full ecosystem that can support that and make that easy is going to be hugely important.

[00:59:14] Matt: [00:59:14] So Ridge is open for business. What kinds of customers are you looking to talk to, and how can they get ahold of you?

[00:59:20] Jonathan: [00:59:20] Oh, well, thank you for the final kind of plug on the way out the door.

[00:59:24] Perfect. So Ridge is open for business. As we mentioned at the beginning, we've been building this for over two years. We are able to offer managed services on top of absolutely top-tier infrastructure in many different geographies around the world. We are looking for a couple of different types of customers and partners.

[00:59:41] We are looking to talk to any cloud native application owner that is interested in finding deployment capability in lots of places around the world. If it's not going to be just, you know, Secaucus, New Jersey, and Frankfurt, then we absolutely have, [01:00:00] you know, all those geographies covered. We have Secaucus and Frankfurt as well, but, you know, we're uniquely qualified to provide infrastructure in a lot of different geographies around the world.

[01:00:09] And the other thing that we're very interested in is data center operators that have, you know, virtual offerings for their customers, but don't have the managed services that modern customers are looking for, customers who walk in the door saying, hey, I love this infrastructure service that you have for me, but I really am looking for a platform-as-a-service offering.

[01:00:31] I want to buy a managed Kubernetes offering, I don't want to run it myself. Data center operators who, you know, have that type of interaction with their customers, and we know that there are many of them out there that are having those exact conversations with some of their top customers, we can absolutely partner with you to solve that problem and to thrill your customers with, you know, a very modern offering on your existing stack.

[01:00:56] Are you hiring? We are hiring for our [01:01:00] sales team in the United States. I'm based in Boston, Massachusetts, or Cambridge, Massachusetts really, but like all of us, I don't go to an office anymore. So we are very flexible in terms of where people come to us from. Our development team

[01:01:13] is also able to work in a bunch of different geographies. My co-founder and our CTO is running the bulk of that team out of Israel, but we are absolutely open to folks on our full-stack development team in other geographies as well. That's

[01:01:27] Matt: [01:01:27] awesome. And it's ridge.co, right? Not the odd, you know, expansion of the top-level domain.

[01:01:33] Yeah. Yeah, ridge.co. Well, Jonathan,

[01:01:37] Jonathan: [01:01:37] either that or ridgecloudnetworksomeofthetime.com.

[01:01:44] Matt: [01:01:44] Awesome. I think that's a better short URL. Jonathan, thank you so much for joining us here at Edge Computing World and on the Over the Edge podcast. This will be available with the Edge Computing World videos, and we'll also publish a version of this on the [01:02:00] Over the Edge podcast, which, again, you can find at overtheedgepodcast.com.

[01:02:04] Jonathan, I really appreciate you joining us. It's been a fabulous conversation, and I look forward to walking this

[01:02:10] Jonathan: [01:02:10] journey with you. That's great. Hey, Matt, it's nice to spend some time with you. Thank you very much for the opportunity, and thank you to the people who tuned in and listened and watched us. Much appreciated.

[01:02:19] Matt: [01:02:19] Awesome. Enjoy the rest of your week, Jonathan.