Over The Edge

Distributed Edge Cloud: A True Greenfield Opportunity with Lee Hetherington, VP of Technology at Ori Industries

Episode Summary

Today’s episode features an interview between Matt Trifiro and Lee Hetherington, VP of Technology at Ori Industries. In this interview, Lee dives deep into the technological challenges of building the next generation of cloud, illustrating his and Ori’s approach to solving the infrastructure, software, networking, and business problems of constructing the globally-distributed edge cloud of the future.

Episode Notes

Today’s episode features an interview between Matt Trifiro and Lee Hetherington, VP of Technology at Ori Industries.

Lee has been in the infrastructure and networking space for over twenty years, and prior to joining Ori, spent the last five years setting strategy and focusing on edge and content delivery for two of the world’s largest hyper-scalers.

In this interview, Lee dives deep into the technological challenges of building the next generation of cloud, illustrating his and Ori’s approach to solving the infrastructure, software, networking, and business problems of constructing the globally-distributed edge cloud of the future.

Key Quotes

 

Sponsors

Over the Edge is brought to you by the generous sponsorship of Catchpoint, NetFoundry, Ori Industries, Packet, Seagate, Vapor IO, and Zenlayer.

The featured sponsor of this episode of Over the Edge is Catchpoint. Catchpoint gives critical knowledge to help optimize the digital experience of your customers and employees. Learn more at catchpoint.com and sign up for a free trial.

Links

Connect with Matt on LinkedIn

Follow Lee on Twitter

Episode Transcription

[00:00:00] Matt: [00:00:00] Hi, this is Matt Trifiro. I'm CMO of edge infrastructure company Vapor IO and co-chair of the Linux Foundation's State of the Edge project. Today I'm here with Lee Hetherington, VP of Technology at Ori Industries. We're going to talk about Lee's experience setting edge strategy for two of the biggest hyperscalers in the world and how he's building next-generation cloud computing edge infrastructure at Ori.

[00:00:20] Hey Lee, how are you doing today?

[00:00:22] Lee: [00:00:22] Good, I'm glad to be here.

[00:00:23] Matt: [00:00:23] That's terrific. So one of the ways I really love to start these interviews is just to ask people how they got into technology. Like, what are your early memories?

[00:00:31] Lee: [00:00:31] I think the same as everybody else, really. Started at a young age, got a ZX Spectrum, played games, loading them from tape, all the kind of fun stuff that goes with that.

[00:00:40] And what is the ZX Spectrum? Really old technology now. I mean, I'm showing my age. Yeah, and then I moved on to a Commodore 64, and then left school and went to work for...

[00:00:50] Matt: [00:00:50] I was the product manager for the GEOS operating system.

[00:00:53] Lee: [00:00:53] Interesting.

[00:00:55] Matt: [00:00:55] The Commodore 64 version, which at one point was the largest-installed-base operating system in the world because it [00:01:00] shipped with every single Commodore 64.

[00:01:01] Lee: [00:01:01] Wow. Outside of the Commodore 64 and a five-and-a-quarter-inch floppy disk drive. Love it. Yeah. And then moved on to leaving school and working for an insurance company in their IT department before joining an ISP and really getting involved and interested in the internet space.

[00:01:16] Matt: [00:01:16] Did programming capture your interest?

[00:01:19] Lee: [00:01:19] I don't think it ever really did. I mean, I'm one of those hacker-type programmers. I can take somebody else's code and make it do what I want, but you know, in my heart, I'm an infrastructure and network guy, not a software guy.

[00:01:28] Matt: [00:01:28] So, you know, the original Commodore 64s weren't networked. What was the infrastructure and networking bug that bit you?

[00:01:36] Lee: [00:01:36] I think when I left working in e-commerce hosting, we had some problems we needed to solve with the network. We were behind one provider who provided terrible service. And so the managing director of the company came to me and said, hey, you need to build one of these multi-homed networks, we need BGP and all these things, go figure out how to do it.

[00:01:55] And I think solving those problems and building my first multi-homed network [00:02:00] was really what got the bug for me, moving quickly from there to MessageLabs and actually working on something of scale. You know, MessageLabs being the kind of first cloud-type thing, I guess: email scanning in the cloud, or SaaS as it was then.

[00:02:14] So that's what really got me. Yeah.

[00:02:16] Matt: [00:02:16] Yeah. And you know, you joined Ori fairly recently, but you came out of a couple of stints at other infrastructure-related companies. Can you tell us a little bit about the work you did just prior to Ori?

[00:02:26] Lee: [00:02:26] Yeah. Sure. So after leaving Symantec, building global networks for them, I joined AWS as a technical developer.

[00:02:34] And so I was working in the edge team, looking at relationships with telcos and where to build PoPs to get better distribution of content, working with the CDN teams, those kinds of things. And then I progressed to Facebook, which was my most recent thing before Ori, building a really distributed edge with embedded caching inside telcos.

[00:02:54]And those kinds of things, like really seeing scale. And that's what really got the bug for edge for [00:03:00] me.

[00:03:00] Matt: [00:03:00] Oh, that's exciting. So tell me some of the most important things you learned about edge computing while doing those projects.

[00:03:08] Lee: [00:03:08] I think distribution is really important.

[00:03:11] The internet is not built in any way on geography, as we all know. And so thinking that you can go and deploy some kind of compute in a city and expect two people in houses next door to each other to be able to use that same piece of infrastructure...

[00:03:26] Matt: [00:03:26] People don't understand that. So what I would love for you to do is walk us through it. Like, how is the internet built, and why is this such a challenge?

[00:03:34] Because for most people it's just magic. Like, the data shows up, my movie shows up, and, like, why are you guys worried about all this stuff?

[00:03:40] Lee: [00:03:40] Yeah, exactly. I mean, if you look at how the internet's built, users aren't always broken out where you would think. It's expensive, and the networks just aren't built in that way.

[00:03:50] And so if you take something like the UK, which is where I am, lots of user connections are backhauled to a big city like London before they're broken out to the quote-unquote internet. [00:04:00] Two users in the same city might not actually be able to send traffic to each other, for example, without going all the way back to London, which for me, from my house here, is 15 milliseconds or so.

[00:04:10] So to get to my neighbor is likely, you know, at least 30 milliseconds, to go there and back.

[00:04:15] Matt: [00:04:15] Why aren't those networks peering with each other in the Cotswolds, where you are?

[00:04:19] Lee: [00:04:19] There just isn't enough traffic in the local geography to actually do that. And so they just don't. I mean, for those networks it was more cost-effective to buy big trunks all the way back to a big city where, you know, you could get those kinds of connections.

[00:04:30] So it was much easier to do that. Equipment was much more expensive, and the local kind of interconnection market didn't exist back then. You know, companies like Vapor, for example, didn't exist. There was no way to do that. You know, is there a common facility in the Cotswolds somewhere where all of the telcos can interconnect?

[00:04:46] Matt: [00:04:46] Well, yeah. And very few people, I mean, people like you and I who are living in this infrastructure world understand this. But you get out of this world even just a little bit, you get into, like, the cloud-native world and developers, and they don't have to know how the internet [00:05:00] works, really. Right?

[00:05:01] They don't have to know these things, at least they haven't in the past, and that may be changing, but it really is quite a miraculous Rube Goldberg system. And the fact is that there are moving parts and fixed infrastructure that's expensive. It's expensive to build a data center. It's expensive to build an exchange point.

[00:05:23] And it's sunk, you know, it's in the ground, and it's hard to move, right? Like, if you've got fiber going from one place to another and you need to go somewhere else because you need to peer with somebody, you might have to get permits and dig up streets and pull laterals. And yeah.

[00:05:36] So I think that's something very few people really appreciate: that there is this infrastructure, and we've managed to abstract all of it away, in some senses, from most users, including developers. But when you start pushing the edge of performance, or use cases that use a lot of data or have low-latency requirements, you start to see the [00:06:00] elbows in the infrastructure.

[00:06:02] So, you're talking about distribution, right? Let's go back to where you said that between two people, even on the same network or on adjacent networks, there might be 30 milliseconds, even though they're standing next to each other in the same room. How is the world going to fix this collectively?

[00:06:16] And what part is Ori Industries playing in that?

[00:06:18] Lee: [00:06:18] Really interesting question. So I think there are a couple of parts to this. The networks need to connect more closely. Things like 5G, which is bringing user disaggregation so that users can be brought onto IP much sooner, are going to really help with this. But there need to be facilities to allow for local interconnection, and there needs to be a desire to do that.

[00:06:40] What's really interesting is that it's not going to be possible in every city. There's just not enough traffic, or not enough demand, to actually go and build the infrastructure. And take the infrastructure itself: as you were just talking about, all of this infrastructure is very old. It's been around a long time, especially when you talk about fiber networks. Some of the routes that are going to be required in the [00:07:00] future don't even exist today.

[00:07:01] Is there any demand to create it? Can you even get permits to build infrastructure in some of those markets? It's a very difficult thing. And so where a company like Ori comes in is that we're building a distributed cloud, ultimately. We're building a cloud that will be able to sit in each network.

[00:07:18] And so perhaps we can't be sharing the same piece of infrastructure when we're close to each other, but perhaps we can actually be using pieces of infrastructure inside our respective telcos, which can then, you know, federate back to something much larger, or to a big public cloud like one of the hyperscalers.

[00:07:35] Yeah.

[00:07:35] Matt: [00:07:35] So, you know, in the US, Amazon is running experiments, and I'm just picking Amazon, all the cloud providers are doing this, running experiments with telcos where presumably they're putting their cloud resources, you know, quote-unquote in the network. But they're a very small latency hop from Verizon's network, probably, so whether or not it's actually in Verizon's facilities, it feels like it.

[00:07:55] So what is Ori doing? How is that different from what Amazon [00:08:00] already does and is probably going to continue doing, moving farther out to the edge? What are you doing that's different and is of value to developers?

[00:08:08] Lee: [00:08:08] We're going much further down the stack. You know, Amazon might be deploying some single-rack or multi-rack deployment, which lives in a big city.

[00:08:19] And it's very suited to that. What we're interested in building is either onboarding assets that already exist. So the telco, maybe they have virtualization somewhere far out on the edge already because they need it for NFV or something similar, and they have capacity. They can onboard those assets to our infrastructure and start to deploy customer workloads there.

[00:08:39] Or we build a very small appliance, which can go much further out to the edge. So we're talking single-server-type deployments that can go in a street cabinet or at the bottom of a cell tower, those kinds of deployments. We're looking to go much further out. We're also building software to allow developers to actually take control of that infrastructure.

[00:08:58] And so [00:09:00] it's all very well and good having a user dashboard that you log into with 10,000 locations where you could deploy your application, but where the hell do you start? How do you pick the location where you need to deploy your app? Really, the users consuming your app should be the ones deciding where to place it.

[00:09:16] And so that's what we're building.

[00:09:17] Matt: [00:09:17] For people who aren't in this world, let's simplify it for them. So in the United States, if you want to deploy a workload on Amazon, you want to deploy an EC2 instance, and you're not some weird bespoke special customer, you're using the standard Amazon dashboard.

[00:09:31] You get two choices: US West, US East. I mean, a couple of availability zones, but essentially, you know, it's on the two coasts, and that's it. In fact, that's where all the cloud providers have their data centers. I mean, Azure has a few more spread out, but even if you look worldwide at a major cloud provider, it's maybe a hundred data centers that you have to worry about, a hundred locales.

[00:09:50] And a human can manage that on a spreadsheet. It's cumbersome, but you can manage it on a spreadsheet. When you get to a thousand nodes, there's just no way. So the world's going to change, like my view of [00:10:00] how I distribute my application is going to change. What's your vision of what that will look like?

[00:10:05] Lee: [00:10:05] So ultimately it's about solving exactly the challenge you just described. A UX with lots of locations, or even a spreadsheet, or even a piece of software that you write that goes and pokes an API to spin things up: they're all very well and good, but actually this infrastructure, as it scales, is not going to look like the clouds.

[00:10:23] I mean, you just mentioned there are, you know, sub-100 locations around the world, and they're all huge, multiple hundreds of megawatts of power. That's not what the edge is going to be. The edge is going to be tens of thousands, hundreds of thousands of very small deployments, and actually taking control of that.

[00:10:38] It's going to be very hard. And so what we're building is a software layer that sits on top of this that can look at user requests coming in and decide which location is closest to the user. So imagine all of the problems the CDNs solved 10, 15 years ago with distributed infrastructure, serving your pictures and videos from somewhere close by.

[00:10:59] We're now going to [00:11:00] have tens of thousands of locales that we can serve those things from. And so there are lots of characteristics that the software needs to take into consideration. What kind of latency thresholds do you have for your application? How much compute resource do you need? How many other customers can we fit on a piece of compute? Who competes with whom?

[00:11:16] Those kinds of things. It's quite a complex game of Tetris, I guess, really.

[00:11:20] Matt: [00:11:20] Certainly. Yeah. And you look at, like, the Kubernetes orchestration system or the Mesos orchestration system, and you can see some of the early thinking around this, where it's doing complex placement across 40,000 servers in a single data center.

[00:11:37] So it's a higher-order problem, but those systems are on that path too. How do you orchestrate containers, for example? It could be VMs, containers, whatever the unit of code is. How do you orchestrate that unit of code across more servers than a human can hold in their head, at timescales that humans don't operate at?

[00:11:58] You know, like, I [00:12:00] might want to move workloads multiple times per second, because things are changing. I mean, at some point, maybe not now. So, related to that, how do I express what I want from Ori Industries as a developer?
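The "game of Tetris" placement problem the two have been circling, matching a unit of code to far more nodes than a human can track, under latency and capacity constraints, can be sketched as a simple greedy rule: among nodes that satisfy the workload's latency threshold and still have spare capacity, pick the closest. All node names, numbers, and the greedy rule itself are invented for illustration; this is not Ori's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float   # estimated latency from the requesting user
    free_cpu: int       # spare vCPUs on the node

def place(nodes, cpu_needed, max_latency_ms):
    """Greedy placement: among nodes that meet the latency threshold
    and have spare capacity, pick the lowest-latency one."""
    candidates = [n for n in nodes
                  if n.latency_ms <= max_latency_ms and n.free_cpu >= cpu_needed]
    if not candidates:
        return None  # no feasible node; fall back to a farther gradient
    return min(candidates, key=lambda n: n.latency_ms)

nodes = [Node("street-cabinet", 3.0, 0),   # closest, but full
         Node("central-office", 7.0, 8),
         Node("metro-pop", 15.0, 64)]
best = place(nodes, cpu_needed=2, max_latency_ms=10)
print(best.name)  # central-office: the closest feasible node
```

A production placer would also weigh cost, anti-affinity, and churn (how often workloads get moved), which is where the Tetris analogy earns its keep.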

[00:12:14] Lee: [00:12:14] So we have the same kinds of tools as the big clouds. You know, you mentioned Kubernetes. We're building on Kubernetes, and we're building things that go alongside Kubernetes.

[00:12:24] Matt: [00:12:24] So do I have to be a Kubernetes developer to use Ori?

[00:12:26] Lee: [00:12:26] No, not at all. We have a UX. You can point our UX at your private container registry, you can pull your containers in, and we can orchestrate this for you. You don't need to be writing code to poke at APIs and those kinds of things.

[00:12:41] What you can then do is enable some of our smart technology, so you don't need to worry about where those containers get placed. You can place them in an initial location, and we can start serving requests. We have something that we're calling cloud deferral, which essentially allows any of our edge locations to accept a user request and then transmit it across our [00:13:00] network

[00:13:00] to where the container is running. And then, through various rules that you can set up in our UI, we would allow you to instantiate that container somewhere else.
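The cloud-deferral idea Lee describes, where any edge location accepts a request and relays it to a site that actually runs the container, could be sketched roughly like this. The site names, tables, and forwarding logic are all hypothetical, not Ori's API.

```python
# Where each app's container is actually running (hypothetical data).
RUNNING_AT = {"my-app": {"metro-london"}}

# For each site, relay candidates ordered nearest-first (hypothetical).
NEIGHBORS = {"cabinet-cotswolds": ["central-gloucester", "metro-london"],
             "central-gloucester": ["metro-london"]}

def handle(request_site, app):
    """Serve locally if the container is here; otherwise defer the
    request across the provider's network to a site that runs it."""
    if request_site in RUNNING_AT[app]:
        return f"served locally at {request_site}"
    for hop in NEIGHBORS.get(request_site, []):
        if hop in RUNNING_AT[app]:
            return f"deferred {request_site} -> {hop}"
    return "no route"

print(handle("cabinet-cotswolds", "my-app"))  # deferred cabinet-cotswolds -> metro-london
```

The rules Lee mentions would then watch deferral volume per site and decide when to instantiate the container closer to the demand.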

[00:13:10] Matt: [00:13:10] So if I'm a container-based developer, it'll fit right into your system. Again, what are the criteria, and how do I express that this workload needs to be here, or needs to have this latency? Like, what are the inputs you allow me, what are the hints I can give you so that you can do the orchestration better? How does that work?

[00:13:33] Lee: [00:13:33] Now, in the UI and with our APIs, you're able to select what we call the gradient. So we have a local gradient, which could be machines distributed right out on the edge side of the telecom network. We have a central gradient, which is, you know, things in the CO or in the core of the network. And then we have the metro gradient, which is our kind of PoPs, which in the end state will be, you know, globally distributed.

[00:13:57] You're able to select from these today. What we're building then, [00:14:00] in the current phase, is smarter technology to allow you to build rules. So now we're talking about things like latency thresholds, or the number of requests per second before we spin up a container for you somewhere else, all of those kinds of things.

[00:14:13] Matt: [00:14:13] And how do you deal with things happening in real time in the network that maybe affect the quality of service negatively, and so you want to move me? How does that work?

[00:14:23] Lee: [00:14:23] Yeah, exactly like that. We're taking measurements from the client by either injecting JavaScript into HTTP requests so that we can start to get telemetry information, or we have what we're calling a smart edge SDK, which is essentially an SDK that you, as an application developer, can embed in your code.

[00:14:42] So what we're looking for here are ways for your application, running on your cell phone or your laptop or whatever, to be able to send us telemetry information back, so it can tell us this request took too long, wherever that may be, and we can get a signal. We can also get a signal from [00:15:00] other people using other apps on the network.
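The client-side telemetry loop Lee describes could look roughly like this: the app reports per-request latency for the edge site that served it, and the platform flags a site for re-placement once its recent tail latency exceeds the app's threshold. The class, method names, and the p95 trigger are invented for illustration; the actual smart edge SDK is not described in detail here.

```python
import statistics

class TelemetryCollector:
    """Toy server-side view of client telemetry, keyed by edge site."""

    def __init__(self, threshold_ms):
        self.threshold_ms = threshold_ms
        self.samples = {}  # site -> list of reported latencies (ms)

    def report(self, site, latency_ms):
        # Called for each client-reported request latency.
        self.samples.setdefault(site, []).append(latency_ms)

    def should_move(self, site, min_samples=5):
        # Flag the site once enough samples show the tail latency
        # (~95th percentile) breaching the app's threshold.
        data = self.samples.get(site, [])
        if len(data) < min_samples:
            return False
        p95 = statistics.quantiles(data, n=20)[-1]
        return p95 > self.threshold_ms

t = TelemetryCollector(threshold_ms=10)
for _ in range(5):
    t.report("site-a", 20.0)
print(t.should_move("site-a"))  # True: tail latency breaches 10 ms
```

Signals from other apps on the same network, or from the RAN itself as Matt suggests next, would simply feed additional samples into the same decision.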

[00:15:02] Matt: [00:15:02] And it probably doesn't even have to come from the endpoint. Like, could you pull congestion information off the wireless RAN if it was available?

[00:15:09] Lee: [00:15:09] Exactly that. It's all about working with and partnering with the telcos to get telemetry information from them. I mean, it's in the telco's best interest to have us back off when there's congestion at the tower, et cetera, because that's no use to the telco.

[00:15:23] Matt: [00:15:23] Now, do you provide any load balancing services, or is that my problem as a developer?

[00:15:28] Lee: [00:15:28] No, no, we have load balancing. You can balance between containers in the same location, and we have global load balancing so that you can distribute load between different nodes.

[00:15:35] And then, obviously, we have something that we're calling intelligent workload placement, which takes that one stage further and takes that load balancing information. And like the CDNs would do, we would take a feed of BGP prefixes at every location from the telcos, so that we can work out which subnet you're in and which cache is closest or has the best route to you.
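The CDN-style mapping Lee mentions, from a client's address to the nearest cache via BGP prefixes, boils down to a longest-prefix match: the site advertising the most specific prefix covering the client wins. A toy version, with made-up sites and prefixes:

```python
import ipaddress

# Hypothetical prefixes each site learns from its telco.
SITE_PREFIXES = {
    "metro-london": ["81.2.0.0/16"],
    "cabinet-glos": ["81.2.69.0/24"],  # more specific: same telco, nearer
}

def closest_site(client_ip):
    """Return the site with the longest (most specific) matching prefix."""
    ip = ipaddress.ip_address(client_ip)
    best_site, best_len = None, -1
    for site, prefixes in SITE_PREFIXES.items():
        for p in prefixes:
            net = ipaddress.ip_network(p)
            if ip in net and net.prefixlen > best_len:
                best_site, best_len = site, net.prefixlen
    return best_site

print(closest_site("81.2.69.142"))  # cabinet-glos: the /24 beats the /16
```

Real implementations use compressed tries rather than a linear scan, and fold in routing metrics and health, but the selection principle is the same.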

[00:15:56] So that's something

[00:15:57] Matt: [00:15:57] that, how do you imagine dealing [00:16:00] with constraints? And let me give you a couple of examples. So a simple example is multiple customers of yours. Want to run workloads in a certain location because they need that quality of service, but there just isn't enough compute there at this moment to serve them all.

[00:16:15] I mean, is it first come, first served? Do I bid in an auction and pay more if I want it? What are the strategies that you're thinking of to deal with that? Let's start with that constraint, the constraint of capacity.

[00:16:25] Lee: [00:16:25] Yeah. I think some of those are business questions that are above me, you know. I'm building the technology to enable this; how the business decides, auctioning, those kinds of things, they're all very fair questions. This is going to be something that's very important with edge. Let's be honest, lots of these locations, which are very deep in the networks, are not going to be huge. And so this is all about having more locations than we need.

[00:16:50] And so that we can move things. You know, if you've given me a latency requirement of five to 10 milliseconds and we have something at five, but somebody comes along and they need [00:17:00] five, perhaps we can move you to something that's seven milliseconds away and you're still happy. I think it's a big checks-and-balances exercise.

[00:17:08] Matt: [00:17:08] Yeah, and you're right, it really is a business problem. Because, you know, if I'm running some workload that, say, has life-safety implications or first-responder implications or something like that, then I want a guaranteed quality of service, and I should be willing to pay for it. Right?

[00:17:24] Exactly. Whereas if I'm just providing a game, I want to offer my free users the lowest latency I possibly can, unless someone else is willing to pay more for it. Those are very different business decisions. So that's really interesting. How do you think about, like, different capabilities of the hardware?

[00:17:41] 'Cause it sounds like, in addition to deploying your own hardware, you also, as you say, will consume and federate other people's hardware. I mean, what if I want or need a GPU? How are you thinking about those kinds of constraints?

[00:17:54] Lee: [00:17:54] Exactly as you say. I mean, the interesting thing with GPUs is this [00:18:00] one-to-one mapping.

[00:18:01] So having a machine with lots of GPUs really far out on the edge, where power and cooling are a huge constraint, is that always going to be possible? Perhaps not. Is it possible slightly further away? Probably. And so actually it comes down exactly to how you describe it: it depends on the hardware. You know, not every location is going to be able to have GPUs.

[00:18:21] And so this is going to be something that we have to surface to the user: hey, you wanted five milliseconds in the city of London, but we can give you seven, because actually we don't have a consistent footprint of GPUs, and that's something you're after.

[00:18:34] Matt: [00:18:34] Which actually puts you in kind of a cool position.

[00:18:35] Because if you get enough users, you can go back to your network or hardware partners and say, last week we got 85,000 requests for GPUs in your geography. Maybe you should deploy some, and here are the checks we'd be sending you, or something like that. That's fascinating, how this is going to evolve.

[00:18:54] Okay. So let's continue to talk about the developer experience. So I build these containers, and I [00:19:00] assume I provide some manifest, which is a declarative statement of what I'd like. Is that reasonably correct? Okay, some YAML or JSON or whatever. And I say, this is what I'd like, and then you go figure out if you can run it the way I want, or maybe tell me you can't give me exactly what I want but can offer me an alternative. Maybe there's some interaction there.
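A declarative manifest of the sort Matt sketches might look like the following. Every field name here is purely hypothetical, chosen to echo the interview (latency thresholds, gradients, GPUs); it is not Ori's actual schema.

```python
# Hypothetical declarative manifest, expressed as a plain dict.
manifest = {
    "app": "my-game-backend",
    "image": "registry.example.com/my-game:1.4",
    "replicas": 3,
    "constraints": {
        "max_latency_ms": 10,   # a placement hint, not a guarantee
        "gradient": "local",    # local / central / metro, per the interview
        "gpu": False,
    },
}

def validate(m):
    """Minimal schema check before handing the manifest to a scheduler."""
    required = {"app", "image", "replicas", "constraints"}
    missing = required - m.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return True

print(validate(manifest))  # True
```

The "interaction" Matt imagines would be the platform answering with a counter-offer (say, seven milliseconds instead of ten) rather than a flat rejection.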

[00:19:17] What about communication across containers, or service mesh, or messaging? Truly distributed applications are really complicated. Are you doing anything to help solve that?

[00:19:29] Lee: [00:19:29] So we're building what we're calling OGE, the Ori Global Edge. We're building this metro gradient, and the metro gradient will be connected by a backbone.

[00:19:37] So we will have private connectivity, so your container in London can talk to your container in Frankfurt, no problem. Where it gets more interesting is as you move further down the gradients. As we move inside the telco network, that becomes more complicated. So it really depends on the relationship with the telcos as to how we do this.

[00:19:56] Can we get MPLS connectivity from the telco, from every [00:20:00] location back to our metro node, or do we now need to establish tunnels over the top of that network? And so what we're actually looking at is multiple different deployment scenarios: we can deploy and have MPLS, we can deploy and have physical connectivity, or we can deploy and establish,

[00:20:16] I guess you would call it, a VPN mesh over the top. And so we can allow your container to talk securely to a container in another location, or all the way back to the public cloud, if that's where the destination of the information needs to be.

[00:20:29] Matt: [00:20:29] Yeah. And that reaffirms one of our theses: that edge computing is as much a networking problem as it is a, you know, where's-the-compute problem. A hundred percent. That's really interesting. What about state? Are you doing anything to help distribute state and make it available when I need it, or do I have to figure that out too?

[00:20:46] Lee: [00:20:46] It's a super interesting problem to solve.

[00:20:48] Database on the edge is something that my colleagues and I talk about a lot. We've been talking about use cases with third parties, also about distributing databases to the edge. Clearly, [00:21:00] database at the edge is super interesting. Things like IoT, the state for that is really interesting. Is that left to the cloud provider, or is that left to the application developer?

[00:21:10] If we were to offer some kind of database as a service in the future, perhaps that would be taken care of by that service. Right now, what we're looking at is how we provide persistent storage, so that you can have a container or a VM spin up in a location with some kind of persistent storage available to it. State is a very interesting question.

[00:21:30] And I think it really depends on the use case and on how the application developer wants to maintain state.

[00:21:35] Matt: [00:21:35] Yeah. And I realize some of these questions are unfair, because they're not really solved. I'm just interested in how you're thinking about and approaching them. You know, I interviewed the CEO of Macrometa, and they're building a company

[00:21:48] specifically around distributing state through a stateful distributed database. And it's a really hard problem to solve, and, you know, maybe they'll be successful because it is such a hard problem to solve. And you might just want to license something like that rather than trying to [00:22:00] build it yourself, and offer it to your customers.

[00:22:01] So yeah, it's a really interesting world, the new kinds of, you know, middleware and infrastructure that are being made available to developers. It resembles what's available to developers today, but it all has this kind of edge flavor to it. What are some of the most interesting applications you're seeing people show interest in deploying on your network today?

[00:22:22] Lee: [00:22:22] I think the most advanced ones that we see today are the game developers and game companies. I mean, they use VMs for running game servers already. And so distributing those VMs even further is something that's very interesting. What becomes more complex in those use cases is how you deal with a multiplayer game where you and I are playing against each other, but we're thousands of miles apart.

[00:22:48] Where do we home the game? Do we both home to something local and then deal with something else across the network, or is it fully distributed? Some of our gaming partners are in that space, solving some of those [00:23:00] problems. They need an infrastructure provider to provide them with

[00:23:03] the smarts to be able to do the distribution, and also to have infrastructure to deploy their kind of virtual-machine infrastructure on. But, you know, some of those use cases are being solved by that. Database as a service on the edge for IoT is something that's being talked about a lot.

[00:23:19] And it's very interesting. You know, back to your question about how we deal with state: as you mentioned, some people are solving that today. I think that's something that's very easy if you're AWS and you have a huge data center somewhere in Ashburn that's got tens of thousands of machines.

[00:23:36] You've got a huge network. It's very easy to do that. But like you say, when you distribute machines across the globe, that's quite hard.

[00:23:44] Matt: [00:23:44] Yeah. I mean, you know, there are legendary stories about how Amazon does this internally, with atomic clocks to synchronize databases, right? So it is a problem where physics gets involved. Like, the decay of a cesium atom affects [00:24:00] whether the database is accurate or not, which is kind of mind-blowing.

[00:24:04] Let's switch gears a little and talk about mobile. Because there seems to be a convergence between the edge community and the 5G community: you know, 5G is going to require edge computing by definition, right? Virtualized networks require compute, and it's going to have to sit in data centers out on the edge.

[00:24:22] But it also enables a whole new series of potential edge use cases, because of the latency and the ability to support billions of devices and all this. But mobile things are in motion. So how are you, or how are your customers, thinking about either one or both of, you know, the internet of moving things?

[00:24:42] Meaning if I need to maintain a certain latency and my target is moving, I need to know about it. And I may need to redirect traffic to another note or start a new container somewhere. That's closer. How are you thinking of that? Her customers think about, and what role do you think Ori's is going to [00:25:00] play in that solution?

[00:25:01] Lee: [00:25:01] So, yeah, this is one of the, I think, nirvana things for us all to be thinking about solving. Ultimately it's the mobility thing. As we know, 5G brings all of this fun compute, or extra compute, that we need, but it also brings with it the challenge of exactly this mobility. Having city-level aggregation is going to be really interesting.

[00:25:21] So does that give you enough latency for your application? Can we serve you as you move around the city of London? Can we serve you from one place? Maybe there are a number of use cases where that's possible. Do we now need to have this kind of intelligent workload placement that understands the topology of the network?

[00:25:38] Maybe it's reading information from the mobile network, with an API or something similar that understands: okay, this is Lee, he's on his phone, he's moving around the city, here are the towers that he's connected to, and these are actually the closest pieces of infrastructure.
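The tower-aware placement Lee is describing could be sketched roughly like this; the tower-to-site map, site names, and latency figures are all invented for illustration:

```python
# Hypothetical sketch: pick the edge site closest to the towers a moving
# user is currently attached to. All names and numbers are invented.
TOWER_TO_SITES = {
    "tower-ldn-01": [("edge-shoreditch", 3.0), ("edge-docklands", 7.0)],
    "tower-ldn-02": [("edge-docklands", 4.0), ("edge-shoreditch", 9.0)],
}

def place_workload(connected_towers):
    """Return the edge site with the lowest mean latency to the user's towers."""
    scores = {}
    for tower in connected_towers:
        for site, ms in TOWER_TO_SITES.get(tower, []):
            scores.setdefault(site, []).append(ms)
    if not scores:
        return None  # unknown towers: fall back to a regional site
    return min(scores, key=lambda s: sum(scores[s]) / len(scores[s]))

assert place_workload(["tower-ldn-01"]) == "edge-shoreditch"
# Mid-handover, attached to both towers, docklands wins on average latency:
assert place_workload(["tower-ldn-01", "tower-ldn-02"]) == "edge-docklands"
```

In a real system the tower list would come from the operator's network API and latencies from live measurement; the point is only that placement becomes a function of the user's current attachment, not a static choice.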

Matt: So one of your stated strategies is to partner with telco operators and bring your capabilities to their network, but also, if they have cloud machines that you can utilize, to federate them.

[00:26:31] So tell me what that looks like and how that works.

[00:26:36] Lee: [00:26:36] Yeah, sure. So imagine you're a telco network. You have a number of enterprise customers, and you would love to have an edge or a cloud offering. We can help you build that kind of offering, either on top of something you already have, because you have VMware or you have OpenStack or something similar

[00:26:52] in your kind of central data centers running other applications for your business. We can give you the ability to have multitenancy [00:27:00] on top of that, to be able to bill your customers, to provide them with containers as a service and DNS and all these other fun services that they're going to require.

[00:27:08] Either on top of existing infrastructure, or we can help you build something; we can help you build from, you know, bare metal all the way up to being able to serve users with containers. Part of that, also, is that you can federate with our global offering, to then be able to sell your customers a global edge offering.

[00:27:26] So once we're in multiple telcos in your local geo (imagine, you know, four or five of the top networks in the UK, plus a whole bunch of other networks), if you're now an enterprise and you're in five different countries, you can go to your local telco, have an enterprise agreement with them, and buy edge cloud services in multiple regions.

[00:27:44] That's kind of the aim of where this is going. So it's about helping the telco build something inside their network, and the by-product of that is that we federate it with our global network and we are able to expand our footprint. So telco A can sell to [00:28:00] a customer on telco B's infrastructure, and vice versa.
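A minimal sketch of that federation idea, assuming a simple registry of which telco covers which region; all names here are hypothetical, not Ori's actual API:

```python
# Hypothetical federation registry: an enterprise contracts with its home
# telco, and deployments elsewhere are fulfilled by federation partners.
FEDERATION = {
    "telco-uk": {"regions": ["uk"]},
    "telco-de": {"regions": ["de"]},
}

def place(region):
    """Find which federated telco can host a workload in `region`."""
    for telco, info in FEDERATION.items():
        if region in info["regions"]:
            return telco
    raise LookupError(f"no federated capacity in {region}")

# The customer buys from telco-uk, but the German deployment is
# fulfilled on the partner's infrastructure:
assert place("uk") == "telco-uk"
assert place("de") == "telco-de"
```

Billing and tenancy would ride on top of this lookup; the structural point is that one contract fans out across many operators' infrastructure.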

[00:28:03] Matt: [00:28:03] Yeah, that's interesting. Do you see a world where you may even federate into your global network some other cloud provider's servers? (Lee: Oh, a hundred percent. A hundred percent.) Yeah, that's interesting. Because, you know, in a prior life I was the CMO of Heroku, and at its simplest level, we were reselling Amazon.

[00:28:22] Now, we got Amazon at a really good price and added our value on top, so there was profit there. But yeah, we essentially federated Amazon servers all day long, and that was a much better model than us trying to deploy our own servers. So, okay: tell me the different ways that, as a developer, I could relate the Ori global edge to my existing cloud experience.

[00:28:41] Lee: [00:28:41] Yeah, sure. So I think this is back to your hurricane example. I don't want to go and build a hundred-megawatt deployment in any city in the world. That just seems crazy to me, and it's not what the edge is all about. Amazon do a really good job of that, so do Microsoft, and so do Google. Why would we try and compete in that business?

[00:29:00] [00:29:00] So I think what we're looking to try and do here is: you're a developer, you're running your application inside one of these hyperscalers, and now you want closeness to the user. That's the thing that's important with edge, and that's where we can come in. We can connect into your VPC, or wherever else you're running, and help you extract traffic from there into our network, so that you can have, you know, an API on the edge talking back to the database inside your hyperscale environment.

[00:29:25] And that's kind of where it's at, and where edge and cloud kind of complement each other more than they compete with each other, really.
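The split Lee describes, an API at the edge backed by a database inside the hyperscaler, is essentially a read-through cache. A minimal sketch, with the origin database stubbed out and all names invented:

```python
# Sketch of the edge/cloud split: a lightweight API at the edge answers
# reads from a local cache and falls back to the origin database in the
# distant cloud region. The OriginDB stub and keys are invented.
import time

class OriginDB:
    """Stand-in for a database inside the hyperscaler."""
    def __init__(self):
        self.rows = {"user:42": {"name": "Lee"}}
        self.reads = 0
    def get(self, key):
        self.reads += 1          # each call crosses the WAN
        return self.rows.get(key)

class EdgeAPI:
    def __init__(self, origin, ttl=30.0):
        self.origin, self.ttl, self.cache = origin, ttl, {}
    def get(self, key):
        hit = self.cache.get(key)
        if hit and time.monotonic() - hit[1] < self.ttl:
            return hit[0]        # served locally, low latency
        value = self.origin.get(key)
        self.cache[key] = (value, time.monotonic())
        return value

origin = OriginDB()
edge = EdgeAPI(origin)
edge.get("user:42"); edge.get("user:42"); edge.get("user:42")
assert origin.reads == 1  # only the first read traversed the network
```

Writes and cache invalidation are where this gets hard, which is exactly the distributed-state problem discussed earlier in the conversation.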

[00:29:32] Matt: [00:29:32] What are the biggest technical challenges that are not yet solved? Because I realize a lot of the business challenges aren't yet solved. Where are some of the biggest technical challenges, not necessarily for Ori, but, I think, to make edge computing work the way we imagine it working?

[00:29:45] What are the biggest challenges that you're seeing?

[00:29:48] Lee: [00:29:48] I think networking is one of the biggest ones. You know, a lot of this doesn't exist today. As we've talked about earlier, lots of things are backhauled to big metropolitan areas, and that doesn't really [00:30:00] lend itself to cloud.

[00:30:00] And so a lot of this is, I guess, semi-hinged on 5G being a big thing and really being rolled out widely.

[00:30:08] Matt: [00:30:08] Is it just dependent on the wireless networks? Why can't we just make most of this happen with the wired networks?

[00:30:14] Lee: [00:30:14] I think it's a combination of both. I mean, what's the big driver? Get things off radio onto the IP network quickly, break out users, distribute the network.

[00:30:24] All of those kinds of fun challenges are some of the things driving 5G to be distributed the way it's being: you know, being a completely kind of software approach rather than lots of racks full of radio equipment, et cetera. Do the fixed networks always see the same problem? Not necessarily. I mean, lots of fixed networks have been doing that for years.

[00:30:44] Can we do that in the mobile space, where there are far more devices? Now, IPv6 helps with a lot of those kinds of things, but on the networking side there's never really been the desire to do the breakout locally like that, or not in every market, at least. So with the mobile and [00:31:00] the fixed being somewhat more converged, because of, you know, the additional fiber assets that are needed for 5G so that we can haul all this kind of traffic, they kind of become complementary to each other in some ways.

[00:31:11] Matt: [00:31:11] Yeah. You know, I had a conversation the other day with this cranky old engineer. He's like: look, if you really look at a map of the internet backbone, it converges with concentration in only a few places on the planet. In the United States, it's like 20 different locations where you really can get on the internet, and

[00:31:31] only about eight or nine cities are considered, like, hefty, you know, like Los Angeles and Miami. And you get down to like Boston and you're like, ah, I'm not even sure that counts. And you get to a city like St. Louis and there's no internet exchange; you have to backhaul to Chicago or something. You just can't get on the internet there,

[00:31:49] like you can't get on the internet in the Cotswolds; it has to go to London before it gets on the internet. And what he was describing is that there's this internet backbone we've built, you know, this amazing thing that we've kind of [00:32:00] cobbled together from a bunch of different networks that emerged independently and then had agreements to share bits and BGP routes and stuff.

[00:32:08] And it's almost like what we need to do collectively is just push the backbone out, push it out to more places where you've got more exchange points that get you onto the internet in more places, or keep traffic off the internet because it just stays local, you know, it's just a local exchange. Do you agree with that basic characterization?

[00:32:25] Lee: [00:32:25] Oh, I really do. It's a chicken-and-egg problem, though, because as we were talking about earlier, these hyperscalers are in what, a hundred locations, sub-100 locations globally. If you draw a map of where all this interconnection happens, guess what: it looks very similar. And so it's this thing of: where's all the content, and where do we need to build a network to haul that content to and from? Until there's more content locally, do we just need huge highways between large cities, or do we actually need to be breaking out locally?

[00:32:55] Breaking out locally clearly has lots of advantages. Do I need to haul [00:33:00] that traffic hundreds of miles, or can I hand it off to the next carrier somewhere local?

[00:33:04] Matt: [00:33:04] Right, and it comes back to a business question as well, which is: okay, I think we can all imagine what the future might look like, where these points exist, these points of presence and exchange points that are farther out in the network.

[00:33:16] We feel they're going to come. It's not a matter of if, unless the governments or the companies just grind to a halt, you know, get into some deadly embrace and just can't get past it. But generally it'll probably happen. The question is when, and whose money. Companies like Vapor IO are investing ahead of the curve.

[00:33:32] We believe that by creating this capability, we will enable developers to figure out how to use it, and they will figure out how to use it. You know, there's a certain bet involved. You look at the iPhone, right? The iPhone was a compelling consumer device on its own, but really the power of the iPhone is that it enabled things like Uber, right?

[00:33:53] I mean, Uber wouldn't have happened unless AT&T had built out a nationwide LTE network to support the [00:34:00] iPhone. It wouldn't have worked if Google and Apple hadn't mapped, you know, the entire world. It wouldn't have worked if the phone didn't have GPS in it; I mean, phones didn't always have GPS. So there had to be all this forward-deployed infrastructure.

[00:34:13] In fact, you just saw the Apple announcements that came out for the iPhone, I guess the 12, where they have all these AR features. Well, AR is a niche, right? So Apple said: we're going to put a lot of expensive AR stuff in this phone, because we believe it's going to enable a new class of applications that are going to differentiate our phones and get ahead of the curve.

[00:34:35] So part of this is, like you said: your business could grow much more quickly, I imagine, to the degree that it's not your capital expenditure putting servers in the field, but somebody else's, and you're figuring out a business relationship to help them sell that capacity. I guess there's not really a question in there, but I'm wondering if you wanted to add to that.

[00:34:53] Lee: [00:34:53] I think it's a really interesting topic.

[00:34:55] And the forward-deployed thing is really interesting. The networks are seeing far more [00:35:00] disaggregation in equipment now, which is really enabling this. You know, we're not now installing what we all like to call god boxes; we're not installing two racks of Juniper equipment. We can now install disaggregated, you know, commodity

[00:35:14] chips. That's much cheaper, all of those kinds of things. The interesting piece, and we've talked about this a little bit, is that this is really a business problem. If I'm a telco, what's in it for me to hand off that traffic locally? Maybe I now think there's a premium in that. The CDNs clearly don't think that telcos should be charging a premium for that, and the CDNs think that they're helping the telcos.

[00:35:32] So you have this chicken-and-egg thing, and I've been on the CDN side of that for a long time. It's hard, from the CDN side, to have a telco telling you: hey, we don't make any money, this is a really hard business these days, and we think we need to charge you a premium so that you can get closer to our users.

[00:35:51] There needs to be a little bit of give on both sides, probably, in order for that to actually work out, because otherwise, what's the business driver for actually outlaying all that money [00:36:00] to build the network, to break out the users? But then, for the CDNs, why should they pay over the odds to serve users locally, which also saves the telcos money on backhaul?

[00:36:09] It's a very interesting business thing that I think we would need a lot more beers to really get to the bottom of.

[00:36:15] Matt: [00:36:15] Well, then, let me turn that into an interesting personal question, which is: your most valuable asset, from a work perspective, is your time, right? Like, Lee Hetherington's time.

[00:36:28] And you've had a good career, I mean, working at Amazon, working at Facebook, doing this really interesting stuff, but you decided to join a startup company. So you're forward deploying your talent, to some extent. Okay, walk me through your thought process. Like, why are you forward deploying your talent?

[00:36:43] What is it about this that, you know, has made it worth it to you personally?

[00:36:47] Lee: [00:36:47] So, I think we all go through our careers, and it's very rare to find a greenfield opportunity that's a legit greenfield. Ori represented a huge greenfield for me [00:37:00] personally. There was a desire to build an infrastructure; Ori already had a great head start in building software, but hadn't actually built any infrastructure muscle.

[00:37:08] And so, for me to come and take all of my learnings from the things I've been doing over the last few years, and to build a team to come and build an infrastructure that sits underneath this great software layer, was a really interesting opportunity. As I said earlier, working at Facebook on edge really got me bitten by the bug of: how do we get the best user performance?

[00:37:28] How do you really distribute this thing? How do we serve billions of users? It's a super interesting challenge, and actually a greenfield I just didn't feel like I could pass up. Mahdi talked about me joining to build this infrastructure for quite a while. He's a very persuasive guy. (Matt: Yes, he is very charismatic.)

[00:37:48] Exactly. I think that's how that ended. And, you know, I've been at Ori almost six months now, and we're really building.

[00:37:55] Matt: [00:37:55] So he basically dragged you into his reality distortion field and here you are. I think that's how we can [00:38:00] describe it. That'd be me too, right? Well, welcome to the club. But, you know, there is some seriousness here, and Mahdi and I talked about this a little bit.

[00:38:08] In fact, you and I talked about it before this podcast started. Greenfield: ten years ago, if you were a mobile developer, you were at first considered kind of a fringe person, and then you got rich, right? Or were in high demand, or, you know, had the best work and all of this. And then, five to seven years later, it was, you know, cloud native developers.

[00:38:31] And if you were a badass Docker programmer and could use Kubernetes, right? That's still the case, but it's not a greenfield anymore. And so the new greenfield is edge computing. Do you have any advice for, I guess, anybody, but I was thinking developers in particular, or maybe not just developers, maybe infrastructure people as

[00:38:49] well, anybody that's looking to do what you did, which is to punch the ticket of a greenfield opportunity? Do you have any advice on how they should get into edge computing, and when they should get into

[00:38:59] Lee: [00:38:59] it? [00:39:00] I think we're in this really interesting position where, you know, people asked me before I joined Ori, like: what are you doing?

[00:39:05] Like, this is crazy, this edge computing thing. Is it hype, or are there really legit use cases that are actually going to drive this thing? And I think that's where, if you're a software developer, the envelope can be really pushed. I mean, we're no longer building monolithic applications.

[00:39:20] Things need to be more distributed. You know, we've all coined the phrase microservices and all the other things that go along with it. That's really where it's at. You know, building big things that live in clouds and use database as a service and all these other things, that's great; but actually to create a great user experience, and to do all of these AR and VR things that people have been talking about for a while now,

[00:39:43] we really need to be embracing things like edge. And, you know, it's not going to be perfect by any means in the short term, but I think, as a collective, we all have a desire to be able to solve some of those use cases. And it's really interesting. I think that [00:40:00] developers pushing the boundaries of what we can do with this kind of infrastructure is going to be really interesting. But it's starting to think of very disaggregated, very small infrastructure:

[00:40:10] there's lots of it, rather than, you know, one big concentration.

[00:40:14] Matt: [00:40:14] So tell me, in your mind, you know, what are some of the big upcoming milestones in edge computing generally? I mean, we talked about the network, and we're hoping that'll happen. Are there any others that you're looking out for in the future, either hoping they're going to happen quickly, or feeling like, you know, you can nudge them and make them happen?

[00:40:31] Are there any other milestones you're looking at?

[00:40:32] Lee: [00:40:32] I think power consumption is a big one. (Matt: Yeah, tell me about that. Why?) I think that, you know, as you go further out to the edge, you're in less-than-desirable locations. I mean, you guys are building what's probably fairly fancy compared to some markets and what's available. You know, you're talking a local telephone exchange where

[00:40:54] 35 degrees Celsius in the afternoon is probably normal.

[00:40:57] Matt: [00:40:57] And there's three inches of water on the floor.

[00:41:00] [00:41:00] Lee: [00:41:00] Exactly. So can I kind of, you know, go to dell.com and buy a server that's capable of going into that environment today? Probably not. There need to be more advancements in hardware. We need to be seeing

[00:41:11] processors that don't take quite as much power, those kinds of things, so that we can cram as much as we can into a small footprint that isn't going to break the bank for the telco that's providing the power and the space.

[00:41:22] Matt: [00:41:22] Yeah. It actually probably means some good things for Arm, and I guess Nvidia now, because that has been Arm's approach to data centers: you know, we're going to try to differentiate on power consumption, among other things.

[00:41:35] Is there anything else?

[00:41:35] Lee: [00:41:35] Software clearly is going to be a big one. I mean, we talked about Kubernetes earlier. That really pushed the boundaries of where things can go, and expanding on that, so that we can deal with disaggregated infrastructure, is going to be the really interesting one. You know, we talked about 40,000 servers in a cluster in one location; that's kind of easy to do, right?

[00:41:57] Building that across the globe? Not so much, [00:42:00] especially when you start to think about edge infrastructure in countries where power's maybe not as good as it could be, you know, parts of Africa and those kinds of things, where edge is going to create most of the difference. Infrastructure can go away as quickly as it can come back.

[00:42:16] And so building applications, on those kinds of things, that can cope with that infrastructure disappearing is going to be

[00:42:22] Matt: [00:42:22] Really important. Is there any location in the globe, any country or concentration of geography, where you feel like edge computing is more advanced?

[00:42:31] Lee: [00:42:31] I don't think so.

[00:42:32] I think that, you know, the concentration of where users are is where the demand is growing. You know, the US and Europe, very advanced parts of Asia. In fact, parts of the Middle East are embracing 5G more than some other places, and so disaggregation and those kinds of things happen

[00:42:51] potentially quicker in some of those markets. And so we could see an introduction in some of those markets faster than we may in others.

[00:42:57] Matt: [00:42:57] That does make sense. You know, again, back to this general [00:43:00] principle of what's the business thing that pulls, right? There's lots of us that are pushing forward deployment; what's the pull

[00:43:06] in deploying a 5G network? I mean, you need to have data centers, or you need to have a different kind of equipment that operates in more harsh environments, or a combination of both, probably, is what's going to end up happening. And the enablement of a 5G network, if you're using virtualized network functions, means you have to have,

[00:43:26] probably, white box servers that are out in the field. And if you've got to put a farm in to drive your radio network, you might as well put another farm in, partner with Ori Industries, and federate. So I can see that happening. I can see it in some of these countries where the governments are forward deploying

[00:43:44] a lot of the technology, and it might even happen in our countries; it might happen in the UK, and it might happen in the United States. That's an interesting one, yeah. But I feel like we're approaching a tipping point, and I can't quite put my finger on it, but it feels like there's a combination of the right elements of push and pull [00:44:00] required to make this converge quickly.

[00:44:02] And I actually think COVID helped. I mean, yeah, lots of bad things have come with COVID, and I don't mean to make light of it, but from a lot of people I've talked to, the general thought is that every company has an automation strategy. You know, if you grow something, move something, sell something, or build something, you have an automation strategy.

[00:44:21] You had a ten-year automation strategy, and now you've just compressed it to three years, because if there's another pandemic, you want to be able to survive in a much different way than the way people have kind of limped along in this pandemic. And so automation is a big part of that. And then, once you're in automation, you're like: well, okay, I've got to run AI workloads and robotic workloads.

[00:44:39] I need low latency. I could put it on-prem, but is that really the right decision? Could I run it from the cloud? You know, there's a world where I've got my high-speed robotic workloads running in an Ori global edge environment, potentially, or some portion of them, you know, even if I want to have some safety loops on the device [00:45:00] or in the factory or whatever. It just makes so much sense.

[00:45:01] Like, why would I drop a data center in my parking lot or my farm field if there's one a millisecond away?
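Matt's "a millisecond away" is roughly propagation physics. Assuming light in fiber covers about 200 km per millisecond one way (a common rule of thumb), a back-of-envelope calculation:

```python
# Rough propagation-delay arithmetic: light in fiber travels at about
# two-thirds of c, roughly 200 km per millisecond one way. So an edge
# site within ~100 km adds about 1 ms of round-trip time. Approximate.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def rtt_ms(distance_km):
    """Round-trip propagation delay over fiber, ignoring queuing and processing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

assert rtt_ms(100) == 1.0    # metro edge site
assert rtt_ms(2000) == 20.0  # distant regional cloud
```

Real latency adds routing, queuing, and last-mile hops on top of this floor, but the floor alone explains why a nearby edge site beats any distant region for tight control loops.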

[00:45:08] Lee: [00:45:08] Exactly. And you look at some of the things that COVID has brought: obviously, increased internet usage. You know, lots of ISPs have talked about various things in their networks; it hasn't affected some, and it's really affected others.

[00:45:21] You saw announcements from people like Netflix, who reduced the bit rate of their video so that they could reduce congestion at peak times. You know, this is affecting not just the backbone networks but also the access networks. If we were moving workloads closer to the user, would that be as much of a problem as it is today?

[00:45:39] You know, we wouldn't have as much traffic traversing the core of the network, potentially, which can only mean better outcomes for the users and allow us to consume richer services, et cetera.
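Lee's point about the core network can be put as simple arithmetic: whatever fraction of demand the edge serves locally never touches the core. A toy model, with invented numbers:

```python
# Toy model: traffic still crossing the core after edge offload.
# All figures are invented for illustration.
def core_traffic_gbps(total_gbps, edge_hit_ratio):
    """Demand that must still be backhauled across the core network."""
    return total_gbps * (1.0 - edge_hit_ratio)

peak = 1000.0  # Gbps of user demand
assert core_traffic_gbps(peak, 0.0) == 1000.0         # no edge: all backhauled
assert abs(core_traffic_gbps(peak, 0.7) - 300.0) < 1e-6  # 70% served locally
```

The same logic underlies CDN caching today; edge compute extends it from static content to live workloads.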

[00:45:51] Matt: [00:45:51] Total change of topic, but it made me think of this, because it's one of the accelerants that I think is helping edge. You know, you mentioned that you're building your platform on Kubernetes.

[00:46:00] [00:46:00] What are your thoughts about open source, in two dimensions? One is as a way to accelerate edge computing generally, but also from a corporate perspective: your use of open source, your contributions to open source, and anything you're building that you are thinking about open sourcing?

[00:46:17] Lee: [00:46:17] Yeah, sure. So we embrace open source, of course; as we've talked about, Kubernetes and various other components. We do have a strategy around open source, and we've contributed a couple of things back already.

[00:46:28] I wouldn't say that we've contributed any of our kind of, you know, quote-unquote secret sauce back quite yet, but there's always time for some of those components. What it's about is driving an ecosystem. It's not just about how Ori builds this great closed-source thing and demands that lots of people come and spend lots of money with us.

[00:46:45] It's about how we help these developers build applications which work on edge. So things like our SDK would be available to developers to use when building their applications. We built a DNS plugin with CoreDNS, as part of [00:47:00] Kubernetes, which we've open sourced.

[00:47:03] Matt: [00:47:03] What does that do? How does the DNS plugin help?

[00:47:07] Lee: [00:47:07] It works alongside the ingress controller of Kubernetes, so that we can start to publish the DNS records out to the internet with an authoritative name server, essentially. We've open sourced this. One of our software slash network engineers wrote the code.

[00:47:20] It's something we use ourselves and it's something we wanted to give back. Of course, as we find bugs and fix things, we're obviously contributing those back also.
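The idea Lee describes, answering DNS queries authoritatively with the address of a nearby edge ingress, can be sketched like this. This is an illustration of the pattern, not Ori's plugin; every name and address below is invented:

```python
# Illustrative GeoDNS-style lookup: an authoritative server answers a
# service name with the ingress IP of the edge cluster serving that
# client's region. All names and addresses are invented.
INGRESS_RECORDS = {
    "app.example.com": {
        "uk": "198.51.100.10",   # London edge ingress
        "de": "198.51.100.20",   # Frankfurt edge ingress
    },
}

def resolve(name, client_region, default_region="uk"):
    """Return the A record for `name`, steered by the client's region."""
    regions = INGRESS_RECORDS.get(name)
    if regions is None:
        return None  # a real server would answer NXDOMAIN
    return regions.get(client_region, regions[default_region])

assert resolve("app.example.com", "de") == "198.51.100.20"
assert resolve("app.example.com", "fr") == "198.51.100.10"  # falls back
assert resolve("nope.example.com", "uk") is None
```

In the real CoreDNS ecosystem the record set would be populated from Kubernetes ingress objects; the sketch only shows the steering decision an authoritative edge name server makes.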

[00:47:28] Matt: [00:47:28] Yeah, it seems to me that one of the most important accelerants of edge, and I think of 5G also, is what I call shared infrastructure, but that's just because I come from infrastructure,

[00:47:41] right? And, you know, you look at the telcos: they started this a long time ago with the cell towers, at least in a lot of countries. Certainly in the US, the cell companies divested themselves of most of their cell tower assets,

[00:47:56] which are now owned by companies like American Tower and SBA and Crown Castle. And, you know, that just cleaned up their balance sheets and made [00:48:00] somebody else responsible for an asset that they could then lease to two, three, or four cell companies, as opposed to just one. And I think that, in the same way that my data centers are multi-tenant (so I build a data center, and if you are a customer of

[00:48:17] mine, you amortize your costs, I amortize my cost across all my customers, and a lot of the heavy lifting is paid for by other people in addition to you), open source is kind of the same thing. It is software infrastructure that is shared. And there's real cost involved. I mean, real people: you paid your network engineer to write that code and clean it up to the point where you felt comfortable

[00:48:41] upstreaming it. Do you have any experience in your past working with shared infrastructure, and how that may or may not have helped in accelerating some edge deployments?

[00:48:54] Lee: [00:48:54] Yeah. I think if you think back to, you know, we talked about the Vapor solution: internet exchange points are [00:49:00] a kind of very similar thing.

[00:49:02] You know, we have a shared switching fabric; many people come and connect, and you get all the good stuff of being connected to the shared fabric, right? I'm a non-executive director of the London Internet Exchange, which is one of the largest exchanges, with infrastructure based in London.

[00:49:20] We have networks from all over the world connecting there. They come to one place, they can plug a cable into the switching infrastructure, and, based on the agreements they have, they have access to all of the other participants on the exchange. That was a huge thing for the internet, being able to do that. You know, with the old telco monopolies, you used to have to pay a carrier to transmit your traffic;

[00:49:42] now you can connect to one of these exchanges and peer with other participants. That had huge benefits for the internet and made it far more robust.

[00:49:50] Matt: [00:49:50] Yeah. You know, I'm using this little bespoke example, but you're right: the entire internet is built on shared infrastructure, to a very large extent.

[00:50:00] I mean, even look at, you know, Amazon, right? That's shared infrastructure, with just different economics. Yeah, it's interesting how the business models are changing, how technology is deployed, and how it accelerates. It's a really fascinating topic.

[00:50:17] So I have a couple of last questions that I'd like to ask you. The first one is: if people want to find out more about Ori Industries, where should they go?

[00:50:26] Lee: [00:50:26] So we're on the internet, of course: Ori.co.

Matt: Are you guys hiring? If I'm excited to get into edge, can I look at some of the jobs that you guys have?

[00:51:25] Lee: [00:51:25] Yes: ori.co/about and ori.co/careers. I'm particularly looking for software developers for my team right now. So it's a really interesting time to be talking about this.

[00:51:36] Matt: [00:51:36] Yeah, that's great. And then finally, Lee, if people want to find you on the internet, do you hang out on any social networks, LinkedIn, or is there any place they can go to find you?

[00:51:47] Lee: [00:51:47] LinkedIn. I'm also on Twitter; interestingly, I have the handle @edgenative, which is quite a good one. (Matt: Well done.)

[00:51:58] Matt: [00:51:58] Yeah. Okay. [00:52:00] Thank you, Lee. I've really enjoyed this conversation. It was fun to get deep into the nuts and bolts, and I look forward to watching Ori's success.

[00:52:09] Lee: [00:52:09] That's great. Thanks.