Over The Edge

From Kubernetes to the Edge with Craig McLuckie, VP of Product at VMware

Episode Summary

Today’s episode features an interview that took place earlier this month at KubeCon’s Kubernetes on Edge virtual conference between Matt Trifiro and Craig McLuckie. In this interview, Craig discusses the Kubernetes origin story, his current work in the Modern Application Platform business unit at VMware, and why he says Edge will be a highly disruptive area of innovation.

Episode Notes

Today’s episode features an interview that took place earlier this month at KubeCon’s Kubernetes on Edge virtual conference between Matt Trifiro and Craig McLuckie.

As a co-founder of the Kubernetes project and co-creator of the Cloud Native Computing Foundation, Craig is a modern-day legend in the space. He left Google in 2016 to found Heptio, and currently serves as VP of Product Management at VMware.

In this interview, Craig discusses the Kubernetes origin story, his current work in the Modern Application Platform business unit at VMware, and why he says Edge will be a highly disruptive area of innovation.

Key Quotes: 

“From a futures perspective, it's all about the edge. This is where I see the most excitement…I think it's going to be a huge growth area and a highly disruptive area of innovation over the coming years.”

“To succeed in a startup, you really need to look for a moment of disruption where the set of incumbents are not able to move as quickly as they might like; where there's a high total addressable market…that's what you can create a successful business out of.”

“There's no substitute for culture. I think if you can establish a very effective cultural bar, if you can design your culture to the problem at hand, and if you hold yourselves to a very high standard, it becomes self-perpetuating…[It starts] with being very, very deliberate about the cultural roots.”

“We are an organization that is in service of the community and in service of our customers, and what we build is honest technology. So we stand behind the way that we build, and we stand behind what we build. We take a great degree of pride and delight in creating honest technology.”

Sponsors

Over the Edge is brought to you by the generous sponsorship of Catchpoint, NetFoundry, Ori Industries, Packet, Seagate, Vapor IO, and Zenlayer.

The featured sponsor of this episode of Over the Edge is NetFoundry. What do IoT apps, edge compute and edge data centers have in common?  They need simple, secure networking.  Unfortunately, SD-WAN and VPN are square pegs in round holes.  NetFoundry solves the headache, providing software-only, zero trust networking, embeddable in any device or app. Go to NetFoundry.io to learn more.

Links

Connect with Matt on LinkedIn

Follow Craig on Twitter

Episode Transcription

[00:00:00] Matt: [00:00:00] Hi, this is Matt Trifiro, CMO of edge infrastructure company Vapor IO and co-chair of the Linux Foundation's State of the Edge project. I'm the host of Over the Edge, a weekly hour-long, interview-style podcast on edge computing and the future of the internet. You can find it at overtheedgepodcast.com and on all the major podcasting platforms, including iTunes and Spotify.

[00:00:20] Today, we're coming to you live from the Kubernetes on Edge virtual conference, and I'm thrilled to be joined by Craig McLuckie, currently VP of Product at the Modern Application Platform business unit at VMware, but also one of the co-founders of the Kubernetes project, the driving force behind the formation of the CNCF, and the former CEO

[00:00:38] and co-founder of Heptio, along with Joe Beda. We're going to talk to Craig about his career in technology, including the origins of Kubernetes. We're also going to cover the past, present, and future of all things Kubernetes and edge. Hey, Craig, how are you doing today?

[00:00:50] Craig: [00:00:50] I'm doing really well, man. Thanks for having me.

[00:00:52] Matt: [00:00:52] This is terrific. I've been wanting to do this for a long time. You and I have been friends for a while, but we've never been in an environment like [00:01:00] this. So this is really kind of fun to do this. How did you even get started in technology?

[00:01:05] Craig: [00:01:05] It's funny, I was kind of thinking about that.

[00:01:07] I think the answer is the Commodore 64. That's where it started for so many kids in my generation. You know, as a sort of nerdy teenager, I discovered that programming was a great kind of creative outlet for me. And so I spent most of the years between 10 and 17, I guess, coding. I know I took a little break from it.

[00:01:24] Believe it or not, in college I actually decided to pursue a different direction, electrical engineering, which in South Africa meant real electricity. I'm not sure that Microsoft really understood that when they interviewed me. I didn't actually do a lot of formal CS coursework, but it was close enough that it enabled me to get a job at Microsoft, and the rest has been a great ride.

[00:01:45] Matt: [00:01:45] Yeah, that's great. And so when did you leave Microsoft and go to Google?

[00:01:50] Craig: [00:01:50] So I left Microsoft back in roughly 2009, 2010, somewhere around there. On my first project at Google, I kind of lucked out. [00:02:00] There was this thing that became Google Compute Engine. It was called Big Cluster at the time, but that's where I met Joe Beda.

[00:02:05] And the starting point was really just working on bringing traditional enterprise VMs into the Google data center. It was a really fun project. I learned a lot about Google's infrastructure, and a lot about some of the challenges of actually bringing those enterprise-grade workloads into that sort of cloud environment.

[00:02:22] And that really set me up on the journey that I've been on ever since.

[00:02:25] Matt: [00:02:25] How did you go from compute engine to Kubernetes?

[00:02:28] Craig: [00:02:28] It's an interesting story. You know, Joe and I poured our hearts and souls into building Compute Engine, and we thought it was great technology. It had a really clean, elegant API, a lot of very favorable performance attributes, some really interesting networking capabilities, et cetera.

[00:02:45] But it was also kind of interesting, because in some ways it was almost too little, too late. You know, when we started the project, Amazon had already opened a huge lead in the ecosystem, and we were starting to see a really strong convergence of ISVs and a lot of [00:03:00] other organizations around the Amazon ecosystem.

[00:03:03] So we stepped back a little bit and really thought about, well, what can we do to change the game some? How can we be a little disruptive? We knew it was going to take a while for Google to build up strength on the go-to-market side of the house and get that enterprise readiness it needed to really compete, which I think they've done a fabulous job of under Thomas Kurian, by the way.

[00:03:22] But it was also clear that if we didn't do something quite disruptive, we would have a really hard time competing. And that really motivated me to think outside the box a little bit and look at other options.

[00:03:35] Matt: [00:03:35] When we met, I was the COO of Mesosphere. And what I think is really interesting about the CNCF and Kubernetes origin story is that you actually embraced these alternative technologies.

[00:03:48] Can you tell me a little bit about what drove that thinking, and how that became part of the foundational ethos of the CNCF?

[00:03:56] Craig: [00:03:56] You know, I remember this one moment where Joe and I were [00:04:00] kind of working on Compute Engine, and it was really kind of sad, because I think the world had had an opportunity to normalize on something like the virtual machine image definition as something that could be relatively ubiquitous, but that just never happened.

[00:04:14] And I remember turning to Joe while we were working on the project. I just had this sort of moment where I was like, you know, whoever solves the problem associated with packaging and deploying software atomically, kind of like the way we do with Borg, is ultimately going to have this amazing sort of runway; they're going to be able to be quite disruptive to the industry. But we were busy on Compute Engine.

[00:04:34] So eventually, once we got to a point where Compute Engine was largely ideated and on rails, we started playing with some ideas, and one of the things that immediately caught our attention was Docker. It was funny, because I remember someone mentioning this to me way back.

[00:04:51] Way before Docker was even popular, it was: hey, you should really be paying attention to this, it's kind of a neat model. And when we started really getting stuck into it, it was like one of those [00:05:00] moments where, you know, wow, we should really have thought of this, right? It was such an elegant way to express a unit of deployment. And the thing that was most elegant about it wasn't necessarily the technology, which was, by Google's way of thinking, somewhat mundane.

[00:05:15] It was really the experience that had been created around the recognition that the Linux container was such a powerful abstraction. And that really got the wheels turning. We started thinking about, well, what would this look like done right? And there were certainly some projects in the ecosystem that were interesting. But more importantly, the way that my friend Brendan Burns, who we were working with at the time, described it: we kind of had the puzzle box when everyone was trying to figure out how to fit the pieces together.

[00:05:42] We'd seen how this would work at scale, and that really motivated us to do something new. And it became obvious that we had to do this through the lens of the community. You know, we just saw the disproportionate level of attention and traction and engagement Docker was [00:06:00] receiving versus the relative maturity of the technology.

[00:06:03] It was clear that if we could create something righteous, meaning something that had a lot of the sensibilities we'd come to take for granted at a place like Google, but do it in a way that brought the community with us, we could create something quite special. And I don't know which of the two it was that originally came up with the crazy idea to make a run at an open-source project, which became Kubernetes.

[00:06:25] Was it Brendan or Joe? I just don't remember. But they were like, hey, what would it look like to do this? And I thought about it a bit, and we were like, yeah, let's give it a go and see what happens. And, you know, it definitely worked out okay in the end.

[00:06:37] Matt: [00:06:37] I think it did. I think, you know, one of the things I've really always enjoyed about you is how methodical you are in your strategic planning.

[00:06:44] You know, there's always some game theory equation going on in your head. And I think that is why the CNCF ended up being the way it is. I think a large part of it had to do with just your conviction that this was the proper way to change an industry, and hats off: I think we can say [00:07:00] Kubernetes has changed the industry, and it is continuing to change the industry.

[00:07:04] So somewhere along the line, you left Google and you co-founded a little company called Heptio. Tell us a little bit about that.

[00:07:10] Craig: [00:07:10] While I was at Google, Kubernetes was on rails and I started to kind of play with some other ideas in the space. You know, what would the next abstraction up look like?

[00:07:20] How would we think about creating a services ecosystem on top of it? I was working on a couple of things there, one of which was the Apigee acquisition. And I kind of got friendly with Chet, who was the founding CEO of Apigee, and he said something to me which kind of just stuck in my head.

[00:07:36] You know, he was like, to succeed in a startup, you really need to look for a moment of disruption, where the set of incumbents are not able to move as quickly as they might like, and where there's a high total addressable market. That's the situation out of which you can create a successful business.

[00:07:54] And it kind of clicked for me. I was like, wow, I'm never going to see these circumstances again. [00:08:00] But it was also, for me, an opportunity to really kind of step back a little bit and walk the journey with customers. You know, I'd looked at this sort of circumstance where I was thinking, well, do I want to do another Kubernetes?

[00:08:15] I thought there would be another Kubernetes in there somewhere, right? Some of the ideas around that eventually evolved into things like service mesh and some of those technologies. But the really interesting opportunity for me personally was the opportunity to engage with customers where they were, to be an effective ambassador for this very rich open-source community, and to bridge the gap between enterprise organizations that were looking to get more intrinsically aligned with upstream technology and the communities that were supporting them.

[00:08:41] And so that's really what caused me to go out and do Heptio. And my friend Joe, well, he had decided to retire, which was kind of hilarious, because he

[00:08:49] Matt: [00:08:49] loves working. I remember that, but I didn't believe him. He tried, but

[00:08:53] Craig: [00:08:53] No, I dunno. Eventually he was like, hey, when are we doing a startup? When are we doing a startup?

[00:08:57] And I just wanted to work with Joe again too, I'll be honest. [00:09:00] I've always enjoyed his perspective on things. And so that really motivated us, and that one worked out okay too. We were very pleased with the traction we made in a relatively small amount of time in terms of just helping some larger enterprise organizations start to make this journey toward cloud native technologies.

[00:09:17] Matt: [00:09:17] What was the most surprising thing that you learned on your journey at Heptio?

[00:09:21] Craig: [00:09:21] I don't know if this is surprising, but it's something that I certainly took to heart as a leader. When I look back on what we created, and the impact that the team we brought together is having within my business unit and within VMware, it's that there's no substitute for culture.

[00:09:38] I think if you can establish a very kind of effective cultural bar, if you can design your culture to the problem at hand, and if you hold yourselves to a very high standard, it becomes self-perpetuating. The quality of individuals we brought in have just done tremendous work within the parameters of the community and within the parameters of VMware.

[00:09:56] And I couldn't be more proud of just the people. And I think that really just started [00:10:00] with being very, very deliberate about the cultural roots.

[00:10:02] Matt: [00:10:02] Great. Tell me about the pillars of the culture.

[00:10:03] Craig: [00:10:03] There were three pillars of the culture that we established. The first was what I used to call honest technology.

[00:10:10] You know, we are an organization that is in service of the community and in service of our customers, and what we build is honest technology. So we stand behind the way that we build, and we stand behind what we build. We take a great degree of pride and delight in creating honest technology. The second cultural element that I used to push a lot was carry the fire: a real passion for disruption, an authentic desire to create something that was bigger than anyone had seen,

[00:10:38] and this willingness to do the hard thing, to walk the hard road when you have to. And then the third element that we put a lot of emphasis on was we before me: the idea that it's about the quality of the team, it's about the quality of the community, it's about doing that little bit of extra work so that someone else doesn't have to do it tomorrow.

[00:10:59] And [00:11:00] obviously there's an ocean of nuance associated with each of those three elements, but just having that anchor of culture that was really understandable and manifest every day informed the decisions that we made, informed who we hired, informed how we interviewed. And I think that really set us up for success.

[00:11:18] Matt: [00:11:18] And, for those who don't know (there are probably two people in the audience), Heptio was acquired by VMware. And back when you and I were working closely together, we used to get the question: VMs or containers, which is going to win? And I think we knew that the answer was yes.

[00:11:36] And now I think definitively it's yes. But can you tell me a little bit about how Heptio, and I guess more importantly to the Kubernetes community, how Kubernetes has been integrated into VMware? And what is VMware Tanzu Kubernetes Grid? I believe that's the product made to kind of navigate this.

[00:11:54] Craig: [00:11:54] I see containers and VMs as being fundamentally different technologies.

[00:12:00] [00:11:59] One solves a packaging and distribution problem; the other solves a hardware isolation and abstraction problem. And we're certainly seeing that it's often a kind of yes-and, not an either-or, series of outcomes. I think, you know, almost every vendor now has some form of hypervisor-isolated Kubernetes or container abstraction.

[00:12:16] And we're certainly no exception to that. You know, for me, when Pat approached us, we certainly weren't looking to sell. We were having a lot of fun. But it was also clear that, to have the impact that I wanted to have on the industry, we needed a bigger boat.

[00:12:32] It's that scene from Jaws: you see the size of the opportunity, you see the size of the impact you can have, and you need that big boat, right? And what I was so excited about was the opportunity to use the strengths that VMware was bringing to the table: that incredibly trusted brand in enterprise computing, an awareness of how to actually operationalize at scale, and an understanding that, you know, getting to that 80% point is only 20% of the effort.

[00:12:59] Right. Like, that [00:13:00] last 20% of enterprise technology is really hard. There's just an enormous amount of effort associated with dealing with the edge cases and getting everything set up. And, you know, to me, I saw this impeccable opportunity to be a part of VMware becoming something more than just a virtualization company.

[00:13:17] Obviously VMware was already well down that road and had a lot of different kinds of businesses. But being in a position where we could build out what I thought of as a legitimate meta-cloud, something that would make on-premises, public cloud, and increasingly the network edge

[00:13:33] look consistent, was just incredibly exciting to me. And so as I've been on this journey, in my head I'm pretty simplistic. One, I want to deliver a ubiquitous Kubernetes substrate that's consistent everywhere. Two, that's not really interesting unless you have an effective control plane to manage it.

[00:13:52] And then three, I want to render up the software supply chain that enables developers to produce business outcomes in that destination. [00:14:00] And, you know, through the initial integration into VMware, we just massively extended our reach through all of those existing facilities and became a really strong anchor for VMware's own navigation and migration to support public cloud computing.

[00:14:15] And then, you know, from a futures perspective, it's all about the edge. This is where I see the most excitement. For me personally, I think it's going to be a huge growth area and a highly disruptive area of innovation over the coming years, and I couldn't be happier to be a part of that journey at a company like VMware.

[00:14:34] Matt: [00:14:34] You brought it up, because I was about to transition to that, since this is the edge day. So when you say the opportunity at the edge, and I won't make you define edge, what are the opportunities that you see for Kubernetes?

[00:14:45] Craig: [00:14:45] Well, I mean, the whole point of Kubernetes was that it would be that Goldilocks abstraction that enables you to treat most infrastructure consistently.

[00:14:53] And the opportunity here is no different. You know, obviously there's a very broad array of [00:15:00] definitions for what edge computing is, from thick edge to thin edge, near edge to far edge, however you want to taxonomize it. The starting point, I think, is just having that normalized

[00:15:12] compute substrate that you can then tie back to a control plane, so you can start to reason about your edge, whether it's geographically distributed, whether it's running in a variety of different topologies, whatever the case may be, as being a common destination. And making that destination accessible to developers that are building physical outcomes is really interesting.

[00:15:33] We've seen so much excitement and engagement around things like reactive computing patterns. We've seen the emergence of CDN-based capabilities that really address the outward flow of developer assets to the edge device. The really exciting thing is, what about the reverse?

[00:15:52] How do we start to synthesize and process information that's being generated at the edge? How do we do that in a topology-aware way that makes [00:16:00] appropriate use of whatever computational resources are homed there, balancing computational consumption with network backbone consumption? And I could not be more excited about the opportunity to participate in that part of the journey.

[00:16:16] Matt: [00:16:16] So let me unpack that a little bit. If you think about Kubernetes as, oversimplistically but I think usefully, a platform that will take a container and, based on a declaration, run it somewhere: in a single data center, there's a set of common declarations that we might make that the scheduler can interpret to figure out where those workloads run within that cluster, in that data center.

[00:16:41] Do you see that metaphor translating? Are the 10,000 servers in my single data center that different from the 10,000 servers in the, you know, 500 data centers that I might be using at the edge? Do those metaphors translate, you think?

[00:16:55] Craig: [00:16:55] I think they do, to a certain extent, but you have to recognize [00:17:00] that there's added complexity to the problem.

[00:17:03] Right? So the interesting challenge with edge is, one, you're just dealing with the scale of operations. Within the logical construct of the data center, you can afford to have an operating model that scales somewhat linearly with the number of clusters, or pick your poison. The moment you start looking at edge,

[00:17:23] it changes the dynamics. You have to have something where the cost of operations just doesn't scale with the deployment topology. Otherwise things are going to get bad. You can't send an IT operator to every edge location to deal with something that you have to update, whereas you could do that in the old days.

[00:17:39] And Kubernetes is just an incredibly important technology from that perspective. It introduces a determinism in terms of how you can reason about deploying something, so you can take that packaged software and run it much like you could in a data center. But the thing that's really elegant about Kubernetes isn't just that I can make placement decisions about which machine instances to use.

[00:17:59] Kubernetes has [00:18:00] created a controller pattern. You know, Brendan Burns, one of the founders of the project, is a creative genius as far as I'm concerned. He was also a robotics professor before this, and so he was all geeking out about kind of control theory. And that certainly is a key element of what Kubernetes is: it's not just about containers and deployment.

[00:18:19] It's about creating an appropriate set of control loops, so that what's within the boundaries of that controller can be managed autonomously by the controller. So it's not just about getting the application out there. It's about the care and feeding: if something goes wrong, the restart dynamics, the ability to deterministically update it, making informed decisions about scalability in those locations without access to

[00:18:42] a centralized control plane, perhaps, in some situations. Now, I certainly work with a lot of organizations where some of those environments are periscoping in their behavior. They might be LTE-connected and there might be a network outage, or they might be on a cruise ship that sails out to sea every X months.

[00:18:55] And it's just not connected at all. And Kubernetes lends itself so well to that, because it's not just [00:19:00] about delivering static technology; it's about delivering software through controllers that can deal with a lot of those parameters. If you can think about it, you can program it, and you can present it to Kubernetes not just at the infrastructure level but increasingly at the application level as well.

[00:19:15] Um, so I think that's going to be incredible.
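
The control-loop idea Craig describes can be sketched in a few lines. This is a toy Python illustration, not real Kubernetes code (Kubernetes controllers are written in Go against the API server), and every name and state shape here is invented. The point is the pattern: the controller repeatedly compares declared desired state with observed state and emits whatever actions converge them, so a degraded or disconnected edge site can heal itself without a human operator in the loop.

```python
# Toy sketch of a reconciliation loop (illustrative names only).

def reconcile(desired, observed):
    """Return the actions needed to move observed state toward desired state."""
    actions = []
    # Restart dynamics: replace replicas that have died.
    missing = desired["replicas"] - observed["running"]
    if missing > 0:
        actions.append(("start", missing))
    elif missing < 0:
        actions.append(("stop", -missing))
    # Deterministic updates: roll forward when the declared image changes.
    if desired["image"] != observed["image"]:
        actions.append(("update_image", desired["image"]))
    return actions

# One pass of the loop: an edge site that lost a replica while disconnected.
desired = {"replicas": 3, "image": "app:v2"}
observed = {"running": 2, "image": "app:v1"}
print(reconcile(desired, observed))  # [('start', 1), ('update_image', 'app:v2')]
```

In a real controller this function runs continuously against watched cluster state; here a single pass is enough to show why the model survives intermittent connectivity.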

[00:19:18] Matt: [00:19:18] So when you think about the edge, you know, there are some pretty fundamental changes that happen at the access network, that last mile, so to speak. A lot of times the ownership of the server transfers from a cloud company to an enterprise. How do you view that transition at the edge of the last-mile network?

[00:19:36] Do you see that getting blurrier? Do you imagine, you know, cloud workloads being spawned on private equipment? I mean, how do you view that?

[00:19:45] Craig: [00:19:45] That's a fascinating topic of discussion. And I think anyone who tells you they know exactly how it's going to go is probably selling something.

[00:19:52] I think there's still a lot of figuring out to do, but I'd say there are a couple of trends. We will see the cloud providers come with deeply [00:20:00] vertically integrated capabilities that are being rendered out into those environments, and they have some very strong assets at their disposal. I think we will certainly see some amazing opportunities for

[00:20:12] organizations to create kind of multi-tenant outcomes in those types of destinations. So independently of who owns a physical piece of serving gear, there's no reason why you couldn't create an API economy, or an edge function economy, that can leverage it, whoever put that piece of infrastructure up there. As long as you can normalize that infrastructure, and as long as you can set up the tenancy model and the isolation boundaries sufficiently, you're in a situation where whole new economies will likely emerge around this.

[00:20:44] And I think that landscape will be quite fluid for some time. But yeah, it's certainly a fascinating dynamic. Exactly as you said, there are these sort of classic boundaries of ownership that are very much in flux at this time.

[00:20:56] Matt: [00:20:56] You know, you can run Kubernetes on a Raspberry Pi, on a [00:21:00] device that's actually in the field.

[00:21:02] And I think that's very different than running Kubernetes on a server that happens to be at the base of a cell tower, in a carrier hotel, or in a regional data center. But I think both are reasonable and applicable.

[00:21:14] Craig: [00:21:14] Yeah, I do think so. And it's interesting, because there's a decent analog here to Linux.

[00:21:18] If you think about the form factors that Linux is deployed into, it's everything from cell-phone-sized devices to mainframes and everything in between. And if you think about what Kubernetes is emerging as, it's effectively a way to program distributed systems patterns, a way to kind of deliver distributed systems patterns.

[00:21:39] And I think the analog is pretty clear. You know, I think Kubernetes complements Linux. I'm not sufficiently arrogant to assume that it will have the duty cycle that Linux has had. I hope it does. I think it likely will, but we still have work to do as a community to make that true. But the potential, the versatility of that [00:22:00] model, I think is quite strong.

[00:22:01] And we're certainly heading in a positive direction with the technology.

[00:22:05] Matt: [00:22:05] If you could provide the audience of developers some direction on where you would like to see the community invest in Kubernetes to explore some of the new opportunities at the edge, what would you advise them to do?

[00:22:20] Craig: [00:22:20] You know, it's interesting. I was thinking a little bit about this before we came on. There's obviously normalizing a variety of different form factors: making sure that we have effective conformance standards around them, effectively making sure that we get those profiles in place, so that if you're building an application for a certain class of deployment, there's a well-qualified profile. Because, you know, when you're running something on a Raspberry Pi, it's not going to feel like running something in your mainstream data centers.

[00:22:48] So I think that's certainly an area that it behooves us to emphasize and focus on. Interestingly, I've been thinking recently about the intersection of WebAssembly and some of these [00:23:00] technologies, and there's an interesting little project from Microsoft called Krustlet. When we start looking at the class and shape and nature of things like edge functions, the ability to write a relatively small sliver of code that can run in a massively multi-tenant context with very high levels of security isolation,

[00:23:18] it really starts to feel a lot like WebAssembly. And so I'd love to see things like the WebAssembly System Interface protocols solidify. I think that could become quite interesting in this world. And it's obviously very early days, but that's something I'd love to see us think about. And then really tooling up the developer experience around some of those pieces would be quite interesting.

[00:23:41] Just enabling folks to start thinking about building applications for these types of deployments, where various pieces are homed in a variety of different destinations depending on the sort of cost economics of where something is run, is going to be quite interesting.

[00:23:56] And again, it's all going to come back to the control plane. You're not going to be a cool vendor if you don't [00:24:00] have a control plane that enables you to deliver outcomes into these types of destinations. At the end of the day, relatively few organizations are going to have the capabilities to operate effectively a full-on SaaS service, to think through the mechanics of how these types of applications are built and delivered and managed and updated and observed.

[00:24:24] And that's going to be interesting. And then the final piece, I think, that's going to be quite interesting, and this is something I don't think the community is focused quite enough on yet, but it's certainly an area that we're focused on at VMware, which is observability. Observability becomes really interesting when you're dealing with something at this scale.

[00:24:43] What does it look like? What does APM look like for an edge-based solution where you have a pretty fragmented, pretty hierarchical topology? How do you reason about a metrics system that's sufficiently hierarchical that you don't overwhelm [00:25:00] the network links with metrics, but you're able to retain what you need to retain from a local deployment perspective?
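The hierarchical metrics idea Craig describes can be sketched roughly like this: each edge site pre-aggregates raw samples locally and ships only a compact summary upstream, so the constrained network link carries a few numbers instead of every data point. (This is an illustrative sketch, not any product's API; all class and function names here are invented.)

```python
from dataclasses import dataclass


@dataclass
class MetricSummary:
    """Compact rollup an edge site ships upstream instead of raw samples."""
    name: str
    count: int
    total: float
    minimum: float
    maximum: float


def summarize_locally(name: str, samples: list[float]) -> MetricSummary:
    """Aggregate at the edge: keep full-resolution data on site,
    send only the rollup over the constrained network link."""
    return MetricSummary(
        name=name,
        count=len(samples),
        total=sum(samples),
        minimum=min(samples),
        maximum=max(samples),
    )


def merge_upstream(a: MetricSummary, b: MetricSummary) -> MetricSummary:
    """A regional tier merges site rollups without ever seeing raw samples."""
    return MetricSummary(
        name=a.name,
        count=a.count + b.count,
        total=a.total + b.total,
        minimum=min(a.minimum, b.minimum),
        maximum=max(a.maximum, b.maximum),
    )


# Two retail sites summarize request latencies locally...
site_a = summarize_locally("latency_ms", [12.0, 15.0, 11.0])
site_b = summarize_locally("latency_ms", [40.0, 9.0])
# ...and the regional aggregator merges just the two rollups.
region = merge_upstream(site_a, site_b)
print(region.count, region.minimum, region.maximum)  # 5 9.0 40.0
```

The point of the design is that summaries merge associatively, so the topology can be as deep (site, region, global) as the fleet requires without any tier needing the raw data.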

[00:25:04] So that's going to be an area of certainly significant emphasis for us, and I think an area that the community would do well to pay attention to.

[00:25:12] Matt: [00:25:12] Telemetry. The importance of telemetry to edge computing has been underappreciated, and it's not just telemetry coming off the applications, it's telemetry coming off the network.

[00:25:22] It's, to your point, coming off of the operational technology. I mean, if one of your micro data centers out at the edge goes onto battery power, you probably want to make some intelligent decisions about starting up backup workloads somewhere and starting to load balance traffic around. And right now, getting that information is very hard, because it's probably buried behind some CAN bus interface that you don't have access to as a software developer.

[00:25:46] And even if you did, you might not know what to do with it. So the telemetry piece, I think you're right, that's really, really underappreciated. From a use case perspective, you know, the autonomous car example is the one that wants to be everybody's favorite, although I think people that really think [00:26:00] about it don't expect the infrastructure to be the driving factor for autonomous vehicles. Maybe for coordinating them, but not the driving itself.
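The battery-power scenario Matt describes is a concrete case of OT telemetry driving orchestration decisions. A minimal sketch of that decision logic might look like this (the event fields and action strings are invented for illustration, not a real facility or orchestrator API):

```python
def react_to_facility_event(site: str, power_source: str,
                            active_workloads: list[str]) -> list[str]:
    """If a micro data center drops to battery power, emit the actions a
    control plane might take: start backups elsewhere and drain traffic."""
    actions = []
    if power_source == "battery":
        for workload in active_workloads:
            # Bring up a standby copy at a healthy failover site.
            actions.append(f"start-backup:{workload}@failover-site")
        # Shift client traffic away from the site running on battery.
        actions.append(f"drain-traffic:{site}")
    return actions


# Utility power is lost at one edge site; plan the failover.
plan = react_to_facility_event("store-0042", "battery", ["pos-api", "inferencing"])
print(plan)
```

The hard part in practice, as Matt notes, isn't this logic; it's getting the power-source signal out of the operational technology and into the software layer at all.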

[00:26:06] But if you look out over the next, let's say, 18 months, what are the use cases that you think are realistically going to emerge that excite you

[00:26:15] Craig: [00:26:15] in edge computing? The two areas where I see the most energy are the retail and manufacturing segments. Look, pretty much every segment you can think of is chewing on the edge problem.

[00:26:28] Retail in particular is an area where I'm seeing things moving very, very quickly. The global pandemic has forced a lot of retailers to take a long, hard look at how they do business and how they operate, and, you know, the old narrative around necessity being the mother of invention: we've seen so much traction there.

[00:26:47] So getting to a point where we have far more computational resource, in a form factor that is appropriate for the destination, that is sufficiently [00:27:00] operable that you can roll it out to a pretty broad array of sites, to unlock everything from relatively simple experiences in small branch locations or small retail locations, to running really high-end inferencing workloads and creating these next-generation shopping experiences, is interesting.

[00:27:20] That segment in particular is moving very quickly. Manufacturing, I think, over a slightly longer horizon, is becoming very, very focused on getting better computational resource and better control of that computational resource. But the problem with a lot of manufacturing is that that's generally tied to pretty high CapEx moments.

[00:27:38] And it's a sort of much more complex rollout

[00:27:42] Matt: [00:27:42] You mentioned COVID-19. And you know, my experience is that every manufacturer had a ten-year automation plan that's now a four-year automation plan because of COVID, because nobody wants to be stuck in that position again, where they're shutting down the lines because they can't get people close enough.

[00:27:56] Yeah. Really, really interesting times. So, Craig, [00:28:00] what do you think the biggest challenges are to adoption of edge devices and edge technology?

[00:28:05] Craig: [00:28:05] Yeah, it always comes back to the people skills. The technology, I think, is emerging. Given the noise, being able to distill a signal from that noise and make smart bets; getting the skill set in place that's necessary to start running a series of experiments; navigating so that you're not making these kind of huge, CapEx-heavy investments.

[00:28:27] Being able to actually start to build into what you're doing, and build your organizational skills at the same rate as you're starting to engage in deploying these technologies, is key. And the control plane: you have to have a sort of hierarchical, highly available control plane to support this. That's necessary because

[00:28:46] it's not just about, you know... I think a lot of folks emphasize, well, what does it take to deploy Kubernetes to this destination? Well, that's one thing; updating it, that's another. But then how do I deploy an application into 10,000 retail [00:29:00] locations? How do I run an experiment in two of those and get the results of that experiment?

[00:29:04] How do I make an informed decision when I'm ready to deploy that update out to the other 9,998 locations, if I'm running a sufficiently large operation? Getting that control plane technology really buttoned down, and being able to reason about a control plane that spans both the infrastructure and also the supply chain that renders those application capabilities, components, and experiences, is going to be key.
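The 2-of-10,000 experiment Craig describes is essentially a canary rollout, and the control-plane decision can be sketched like this (site names and function names are hypothetical; a real fleet would drive this through a GitOps or fleet-management controller rather than a script):

```python
import random


def plan_canary(sites: list[str], canary_count: int,
                seed: int = 0) -> tuple[list[str], list[str]]:
    """Pick a small canary group; everyone else waits for the verdict."""
    rng = random.Random(seed)  # seeded so the experiment is reproducible
    canaries = rng.sample(sites, canary_count)
    rest = [s for s in sites if s not in canaries]
    return canaries, rest


def promote_if_healthy(canary_results: dict[str, bool],
                       rest: list[str]) -> list[str]:
    """Only roll the update out to the remaining sites if every canary passed."""
    if all(canary_results.values()):
        return rest  # informed decision: deploy to the other 9,998
    return []        # hold the rollout and investigate


sites = [f"store-{i:05d}" for i in range(10_000)]
canaries, rest = plan_canary(sites, canary_count=2)
results = {s: True for s in canaries}  # pretend both canaries passed health checks
to_update = promote_if_healthy(results, rest)
print(len(canaries), len(to_update))  # 2 9998
```

The interesting work is everything this sketch hand-waves away: collecting the health signal from the canary sites, and the supply chain that actually delivers the update to each location.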

[00:29:28] And I'd say the second area, as we start looking at the energy and impetus to have more multi-tenant edge-based facilities, is going to be a challenge. You know, Kubernetes itself was certainly never built as a natively multi-tenant environment. So being able to have better lines of isolation and security, a tenancy model that's smart for much lighter-weight, function-like use cases:

[00:29:56] that's going to be a big challenge for this industry.

[00:29:59] Matt: [00:29:59] It sounds like a lot of [00:30:00] exciting work to get done. Craig, thank you. Thank you for joining us today. How can people find you online and learn more about

[00:30:07] Craig: [00:30:07] your work? Well, you can always find me, that's @cmcluck, on Twitter. And then I occasionally post blogs on Medium.

[00:30:15] If you search for my name, Craig McLuckie. I'd love to hear from folks. And, uh, thank you so much for having me on. It's been a lot of fun chatting with you, Matt.

[00:30:22] Matt: [00:30:22] Yeah, thanks a lot, Craig. Really appreciate it.