Over The Edge

Laying the Railroads for an Open Grid with Vishwamitra Nandlall, VP Technology Strategy & Ecosystems at Dell Technologies

Episode Summary

This episode of Over the Edge features an interview between Matt Trifiro and Vishwamitra Nandlall, VP Technology Strategy & Ecosystems at Dell Technologies. Vish is an experienced CTO and a highly regarded telecom visionary. He is responsible for defining Dell’s technology strategy in the Big 6 domains: 5G, Edge, Data Management, Cloud, AI and Security. In this episode, Vish talks about laying the railroads for telecommunications, discussing the development of personal data appliances and smartphones, the creation and privatization of the internet, and the role Dell plays in the future of technology.

Episode Notes

This is part 1 of a 2-episode interview on Over the Edge between Matt Trifiro and Vishwamitra Nandlall, VP Technology Strategy & Ecosystems at Dell Technologies. Vish is an experienced CTO and a highly regarded telecom visionary. He is responsible for defining Dell’s technology strategy in the Big 6 domains: 5G, Edge, Data Management, Cloud, AI and Security. Widely recognized for his contributions to the industry, Vish has held CTO executive leadership roles in telecommunications for 25 years, including at Telstra, Ericsson, Extreme and Nortel. Vish has been awarded a fundamental patent for LTE, published several widely cited technology papers, and holds several patents for the design of cloud-based mobile applications and communication services.

In this episode, Vish talks about laying the railroads for telecommunications. He discusses his career path and the work he did developing personal data appliances and smartphones. Vish goes over the development and privatization of the internet, explains how artificial intelligence has become a huge part of Dell products, and delves into details about the build-out of infrastructure, connectivity, and bandwidth.

---------

Key Quotes:

“We've moved towards this internet where we've centralized compute that creates pooling efficiencies, but it comes at the expense of propagation delay. But, largely what fueled the internet was the growth of a digital economy that was self-contained. It didn't have a lot of tendrils out into the real world. And, I think what Edge does is it starts to put up a series of sensors into the real world and it bridges this real world with this cyber world.”

---------

Show Timestamps:  

(02:15) Getting Started in Technology

(05:15) Path to Telecommunications

(06:25) State of Advanced Technology in 1995

(10:15) Switch Networks

(15:15) Development of Internet Content Delivery

(23:15) Networking Standards and Internet Organization

(25:45) Path to Dell

(30:15) Difference between Propagation and Transmission

(32:45) Path to Dell Continued

(38:15) Software vs. Hardware

(39:15) Dell’s Role in Telecommunications

--------

Sponsor:

Over the Edge is brought to you by Dell Technologies, to unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we’re here to help you simplify your edge so you can generate more value. Visit DellTechnologies.com/SimplifyYourEdge for more information or click on the link in the show notes.

--------

Links:

Connect with Matt on LinkedIn

Connect with Vishwamitra on LinkedIn

www.CaspianStudios.com

Episode Transcription

[00:00:00] Narrator 1: This is part 1 of a 2-episode interview on Over the Edge between Matt Trifiro and Vishwamitra Nandlall, VP Technology Strategy & Ecosystems at Dell Technologies. Vish is an experienced CTO and a highly regarded telecom visionary. He is responsible for defining Dell’s technology strategy in the Big 6 domains: 5G, Edge, Data Management, Cloud, AI and Security. Widely recognized for his contributions to the industry, Vish has held CTO executive leadership roles in telecommunications for 25 years, including at Telstra, Ericsson, Extreme and Nortel. Vish has been awarded a fundamental patent for LTE, published several widely cited technology papers, and holds several patents for the design of cloud-based mobile applications and communication services.

In this episode, Vish talks about laying the railroads for telecommunications. He discusses his career path and the work he did developing personal data appliances and smartphones. Vish goes over the development and privatization of the internet, explains how artificial intelligence has become a huge part of Dell products, and delves into details about the build-out of infrastructure, connectivity, and bandwidth.

But before we get into it, here’s a brief word from our sponsors…

[00:01:30] Narrator 2: Over the Edge is brought to you by Dell Technologies, to unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we’re here to help you simplify your edge so you can generate more value. Visit Dell.com for more information or click on the link in the show notes.

[00:01:51] Narrator 1: And now, please enjoy this interview between Matt Trifiro and Vishwamitra Nandlall, VP Technology Strategy & Ecosystems at Dell Technologies.

[00:02:00] Matt Trifiro: Hey Vish. How you doing today? 

[00:02:03] Vishwamitra Nandlall: Not bad. Yourself, Matt? Great to see you.

[00:02:07] Matt Trifiro: Yeah, likewise. You know, one of the things that I always like to ask people right out of the chute is, how did you get interested in technology?

[00:02:14] Vishwamitra Nandlall: I grew up on the east coast of Canada, in the province of New Brunswick, which is one of the Atlantic provinces, and had a really interesting upbringing. It was in the shadow of this massive university called the University of New Brunswick, one of Canada's oldest university campuses.

I lived behind the engineering building for about 10 years of my life, and I would, I guess through osmosis, just get bound up and intertwined with a lot of different technology discussions with professors and students.

[00:02:45] Matt Trifiro: As a child?

[00:02:46] Vishwamitra Nandlall: Yeah, quite young. Interesting, yeah. So for me, I think the whole thought of getting into engineering was locked in at a pretty early age. I mean, you're about the same age as me, I think, Matt, so a lot of this happened during the halcyon days of the personal computer coming up, and this whole notion that I could affect massive change through the power of raw compute.

[00:03:11] Matt Trifiro: What was your first personal computer?

[00:03:13] Vishwamitra Nandlall: My first personal computer would've been probably a Commodore 64, which was [00:03:20] kind of one of those fascinating introductions to the world of digital. Do you remember GEOS?

[00:03:22] Matt Trifiro: The Graphic Environment Operating System? Gosh, yes, I do remember that. I worked for Berkeley Softworks, and at the time I was running the documentation department, so you probably read my manuals. At one point GEOS was the largest installed-base operating system in the world, because it shipped with every single Commodore 64. So it had millions of units, which is nothing today, but at one point it was the number one installed-base operating system.

[00:03:48] Vishwamitra Nandlall: Right, that is hilarious. That's the interesting thing about the early days of any kind of technology diffusion: you get to be the first and the pioneer, and subsequently get replaced by the folks who come in on the echo of that technology. That's always fascinating to me, how that works out.

[00:04:07] Matt Trifiro: Now, was your interest primarily on the hardware side or the software side? Obviously you've got a lot of telecom in your background.

Was it like RF engineering? Like what was your primary engineering interest? 

[00:04:17] Vishwamitra Nandlall: It was always on the hardware side. I guess this was prior to the software boom. The way I like to look at it is that we were laying the railroads for the industry: the hardware came first, and the software came along and took advantage of all these deployed capabilities that we had designed.

So back in my day, what we called hardware was decidedly different from what people call hardware today. We would do board design, we would develop the ASICs and the FPGAs, we'd usually develop our own operating systems and, as you've probably had a chance to do in your life, we would develop most of the firmware.

And then that was kind of put on a silver platter for software developers to come in and try to bust apart. [00:05:00] It was definitely a different discipline back then.

[00:05:02] Matt Trifiro: How'd you end up in telecom specifically? Because you did a tour through Ericsson as the CTO, I understand. How did you end up in telecom?

[00:05:08] Vishwamitra Nandlall: Back when I started in the workforce, which would've been 1994 or 1995, the place where every good Canadian engineer went was Bell-Northern Research. That was kind of the beacon for most people's careers if they went into electrical engineering. That was exactly the path I followed: I got my iron ring and then I made a beeline to Ottawa, Ontario and joined the advanced technology team at Bell-Northern Research, which eventually became Nortel.

And obviously a lot of people are fairly familiar with the story of Nortel.

[00:05:40] Matt Trifiro: So back in the nineties, what was advanced technology from a telecom standpoint? Because '95 is when the browser landed on my radar, and I remember telecom, if I recall correctly, was kind of "packet switched over my dead body," but I don't know for sure.

So tell me what advanced technology looked like back in '95.

[00:06:01] Vishwamitra Nandlall: Well, there were a number of things that we were trying to solve. Believe it or not, Nortel was in the middle of something called VISIT, which was basically a teleconferencing system, much like how we've enjoyed Zoom podcasts and conference discussions over the pandemic. Back in 1995, there was a real view that maybe we could have video-telephone-type communication, kind of a...

[00:06:23] Matt Trifiro: AT&T showed the first operating video phone at the 1964 World's Fair. It only took us around 30 more years.

[00:06:31] Vishwamitra Nandlall: So I worked on some of those early systems.

We did prototype systems; the headline for Nortel was something called VISIT. I also worked on [00:06:40] some of the early smartphones. I guess we wouldn't have called them smartphones back then, they were personal data appliances, and we had probably one of the first to hit the market. In fact, it didn't really hit the market.

It was something that was completely developed and then put into some kind of dark, dusty warehouse, probably right next door to the Ark of the Covenant. That particular device was something that could do email, you could have voice conversations on it, you could take notes, we had little games and stuff for downtime distractions, and all sorts of interesting things, but ultimately Nortel decided not to commercialize it.

[00:07:16] Matt Trifiro: Well, I mean, Canada is the origination point of what I think was the first credible smartphone, from Research In Motion, or, as most people know it, BlackBerry, right? Which preceded the iPhone by years, and for a good stretch of time was probably the most advanced smartphone. I still miss that keyboard, actually.

[00:07:34] Vishwamitra Nandlall: Yeah, there was something compelling about it, absolutely. I mean, those were the types of things that we did as advanced technology. In fact, that bled into one of my first full R&D projects, because for most of my career I was a researcher in the early days and then went into pure R&D. We had decided to develop, effectively, a shared-nothing processing platform that could scale out for voice communications.

What does that mean, shared nothing? Meaning that, effectively, each of these processing elements had memory and context that didn't have to be distributed to any peer processing elements in order to scale. They could each be deployed individually, and as you individually deployed new processing elements, the [00:08:20] system scaled in processing power.
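
To make the shared-nothing idea concrete, here is a minimal sketch in Python. It is purely illustrative: the element names, the hash-based partitioning, and the data structures are assumptions for the example, not the actual Nortel design.

```python
# Shared-nothing scaling, illustrated: each processing element owns its own
# subscriber context, so no state has to be shared with peers, and capacity
# grows by simply adding elements. (Hypothetical sketch, not the real design.)

class ProcessingElement:
    def __init__(self, name):
        self.name = name
        self.contexts = {}  # subscriber state lives only on this element

    def handle_call(self, subscriber):
        ctx = self.contexts.setdefault(subscriber, {"calls": 0})
        ctx["calls"] += 1
        return f"{self.name} completed call #{ctx['calls']} for {subscriber}"


class Switch:
    def __init__(self, elements):
        self.elements = elements

    def route(self, subscriber):
        # Simple hash partitioning; a real system would use consistent hashing
        # so existing contexts stay put when elements are added.
        pe = self.elements[hash(subscriber) % len(self.elements)]
        return pe.handle_call(subscriber)


switch = Switch([ProcessingElement("pe-1"), ProcessingElement("pe-2")])
print(switch.route("+1-613-555-0101"))

# Scaling out is just deploying another independent processing element:
switch.elements.append(ProcessingElement("pe-3"))
print(switch.route("+1-613-555-0199"))
```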

[00:08:22] Matt Trifiro: I'm sorry, I'm going back to telco. Is it like line cards and dial-tone scaling, or was it...

[00:08:27] Vishwamitra Nandlall: No, no, this was pure processing for voice switching. So as a voice call came in, you terminated it, you connected it to a different appliance, pulled the context for that subscriber, and then rang it and completed the session.

So all the control plane processing had to occur in that system.

[00:08:44] Matt Trifiro: So this was still digital switching, basically. Okay. And it wasn't IP-based, it was actually a switched network.

[00:08:50] Vishwamitra Nandlall: It absolutely was, yeah. We were running against an architecture at Nortel called DMS, digital multiplex switching. Nortel was one of the first companies to come out with digital switching, which kind of made their name in telephony.

And the evolution of that was, well, how do I scale the compute on that so we can get to multiple millions of busy-hour call attempts? Which, you know, at the time was really exciting to me, doing multiprocessing systems. The downfall of working on an architecture that's largely opaque to the general public is that it's very difficult to get any kind of attaboys from your parents about what you're doing.

They would literally just say, oh, you can do the same thing that you could do 10 years ago, except more of it.

[00:09:33] Matt Trifiro: My phone still works.

[00:09:34] Vishwamitra Nandlall: Well, that's really fantastic. Yeah, it really prepared me for my whole career, in that I was never, ever going to get recognized as someone who was doing something of interest to the general population.

Yeah. 

[00:09:47] Matt Trifiro: Now, for those in the audience that maybe got a little lost in this inside-baseball terminology, I think it's really interesting. Could you describe to the audience what a circuit-switched network is and what a packet-switched network [00:10:00] is, and how that relates to everything today?

[00:10:04] Vishwamitra Nandlall: Yeah, let me back into that kind of discussion. So in the early days of telecom, the primary information that we were transacting in was voice, encoding voice information. And the way we would do that was to use a synchronous network. Today networks are largely asynchronous; back then there were deterministic slot times in which you could transmit information signals.

And each of those information signals was of identical length, so it was largely uniform in terms of what you were switching and, as a result, very predictable. As long as it was switching fast enough, I could hold a number of different conversations simultaneously on the same link. The interesting thing about that was, if people are pausing, there are a bunch of slots that are effectively empty because there's no information to be carried. And so there was the recognition that there were opportunities to put other information in those slots, that we could statistically multiplex and that there were gains to be had in being able to do that. And then we also recognized that there were efficiencies for different applications if you could have different-sized information packets being transmitted.

When your only application is voice, you can standardize on one thing; it's one size fits all. If you're transmitting a massive document, then having it split up into small little chunks is very inefficient as opposed to just sending it in one big packet, for instance. So this notion of let's have many different packet sizes and be able to statistically multiplex things, and [00:11:40] instead of requiring time synchronization everywhere we'll have effectively an asynchronous system, all those properties started to coalesce together to form this new era of packet-switched networks. And that's where IP came in. The idea was, instead of having something that was pristine and deterministic, we'd have something that's best effort, and we'd understand how to compensate for errors and other things using transport protocols and retransmission.

And we felt that this could be deployed much more quickly, could accommodate much more traffic, and at the end of the day would be more economical to deploy, and all those things proved to be true. Of course, today we rarely see time-division multiplexing as the principle for switching across the internet, although it still does exist; it's there in pockets.
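
To put a rough number on the pooling gain Vish describes, here is a small Python sketch. The call count and the 40% talk-activity figure are illustrative assumptions, not measurements from the conversation.

```python
# A toy comparison: TDM reserves a slot per conversation whether or not the
# speaker is talking, while statistical multiplexing only carries slots that
# actually hold information. (Hypothetical numbers, for illustration only.)
import random

random.seed(1)
conversations = 30          # simultaneous calls on one link
activity = 0.4              # assumed fraction of time a speaker is talking
frames = 1000

tdm_slots = conversations * frames   # one reserved slot per call per frame
statmux_slots = sum(
    sum(random.random() < activity for _ in range(conversations))
    for _ in range(frames)
)

print(f"TDM slots reserved:        {tdm_slots}")
print(f"Slots actually carrying:   {statmux_slots}")
print(f"Utilization of a TDM link: {statmux_slots / tdm_slots:.0%}")
```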

[00:12:33] Matt Trifiro: Well, you know, everything old is new again, because when you think about voice, it's kind of the original time-series data in some ways. We talk about the internet as pipes, and when you intuitively think of pipes, you think of putting something in one end, then the next thing behind it, and the next thing behind it, and it all sort of forces itself out the other end in the same order you put it in.

And that's how the original telephone networks worked. When there were analog networks, there were literally physical switches that would connect two circuits together, and you would literally be connecting from Los Angeles to New York through a physical circuit. At some point it was actually data, but it was sent in this sequenced way.

So the first word that came out of my mouth was the first word that arrived at the other side. And [00:13:20] IP networking is very counterintuitive in some ways. It's like, okay, look, we're not going to send this in order, and we're not going to send everything over the same path. We're just going to kind of throw it at the internet and let the internet figure out how to get it there, or let my network figure out how to get it to the other side.

And then we'll just sort it out and reassemble it on the other end, which is miraculous. And one of the downsides that I think you pointed to is, as you start moving to best effort, you lose some of the... the word you used, I think, was discrete. Was it discrete timing? Was that the word you used?

[00:13:50] Vishwamitra Nandlall: I think I was saying determinism.

[00:13:51] Matt Trifiro: Determinism, deterministic, right. Yes, okay, that was the word I meant to use. Deterministic, where you say, okay, this thing that I'm putting on the network is going to arrive at the other end at precisely this time, and the other end knows that.

And when you look at modern applications that people are attempting to deliver over IP networks, you're looking at a lot more applications that need those deterministic qualities. Let's say cloud robotics: you're just trying to move a robotic limb a tenth of a millimeter, but you need to move it at precisely the right time, or within a certain tolerance.

And it doesn't seem to me that best effort is what you want there. So how are we reconciling these two worlds now? We have this incredible foundational technology of TCP/IP and packet switching and all these protocols, which have all these best-effort characteristics, and yet we have things with very deterministic needs, like a 5G radio access network or a robotic arm. How are we squaring that circle?

[00:14:53] Vishwamitra Nandlall: Yeah, it's interesting, because I think if you go into the wayback machine and think of [00:15:00] what the model of the internet looked like in those early days, let's say 1995, it's probably what most people think the internet still looks like. It had effectively a bunch of computers that were connected across different houses and enterprises.

Those were connected through eyeball networks, consumer networks that AT&T and other companies ran. You had transit networks, where those eyeball networks were then connected into massive backbones, and those were connected and built by companies like AT&T; all the big telecoms had these backbones. And the way the economics worked was that all the money kind of flowed up and all the pain flowed down. That's a great way of describing it. That was kind of the model of the early internet, and it was massively best effort. In the early days the internet wasn't really critical infrastructure; it was, I'm just glad it worked. Over time, to your point, we started to get a little bit more demanding about the types of services that we wanted from this infrastructure.

And over time we started to see data centers fall into the heart of the internet. It shifted the critical attention away from the business center, which used to be the backbone, into these hyperscale data centers that started to grow and started to house all the content. That colocation of content started to allow some level of service guarantees to be driven, and a number of different technologies enabled that, cloud computing probably being the most prolific of them.

I think what happened as we moved away from [00:16:40] the internet as just a series of webpages, toward the internet as something that can distribute content, was another massive change that started to address this issue of latency, and that was content delivery networks. You had, famously, Akamai coming in.

These guys created less of a pattern where they're pooling a bunch of servers; they really started to embrace much more of a distributed type of deployment, where they would have content caches near consumer networks, closer to the end user, so that when you pull content, it's served locally.

[00:17:20] Matt Trifiro: I mean, in many senses, that was the original edge computing.

In fact, the earliest reference I can find to the phrase is in the scientific paper that the Akamai founders published on how their network worked.

[00:17:28] Vishwamitra Nandlall: Certainly. But I think those things started to tease out how we could deal with things that were latency sensitive, particularly video. And in fact, today I would argue that most content that consumers access is on-net. It's actually not going through the internet; it's going through maybe the first eight hops to your first point of presence, and most of your content is served from there. Which really started, in my mind, to break up some of the semantics of the internet. We had this hierarchical thing; now you have something that, instead of hierarchical, is densely connected and flatter than it used to be, much flatter than it used to be.

You have these massive hyperscalers who have consumed transit, so transit almost doesn't exist anymore. In terms of the diversity of where content is, it used to be [00:18:20] that maybe a thousand or 1,500 different ASNs constituted 50% of the traffic; today it's probably closer to 10, and we all know who they are.

It's Netflix, Amazon; it's been massively consolidated. And so the heart of the internet has become very opaque, and it's managed almost through a whole set of proprietary protocols.

[00:18:41] Matt Trifiro: And certainly on proprietary networks. Yeah. I think the current estimate is that as much as 70% of the traffic on the quote-unquote internet as we think of it, collectively both private and public, is on private networks.

It's on the backbones of Google and Facebook.

[00:18:57] Vishwamitra Nandlall: And when I say proprietary, maybe I'm being a little fast and loose, but it comes down to this: I remember, in the public internet, experimenting with TCP, we had different types of algorithms. You had Reno, you had Tahoe, you had Vegas, you had all these different methods for doing congestion control on the internet over TCP.

And it took a long time for any one of them to get adopted and diffused. But now you've got Google and Facebook coming up with BBR or QUIC or whatever, and they get distributed through the internet very quickly, largely because they own so much of the internet that they can just use it as the transport protocol of choice in their networks.

That, to me, is a few companies who are able to hold sway over which transport protocols get picked up and, as a result, get diffused very quickly and impact consumers' lives. That didn't happen before. It used to be governed by a set of internet architects and the IAB and the IETF, who went through very [00:20:00] slow processes.

And then it had to get adopted commercially. You had this two-speed kind of organization, on the commercial side and on the standards-making side, that was driving protocols and creating efficiencies in the internet. Today it's all really been taken over by monopoly sway, right?

It's, yeah, we're going to do this. It reminds me of the early days of Qualcomm, when we were doing 3GPP2, where we had a specific cellular protocol in North America, and then you had 3GPP, which was the European, GSMA-governed set of protocols. The difference between those standards bodies' meetings is, you'd go to 3GPP2 and Qualcomm would come in with a spec and say, this is what we're going to do, and everybody put their hand up and voted on it, and that was done.

And then you'd go to this other 3GPP standards body in France, and it would be years of negotiation before they decided to do the next thing. So the speed was quite a bit different under an authoritarian kind of dominion versus something that's a little bit more egalitarian.

And so I think we're seeing that same kind of dynamic govern how the architecture of the internet is being ruled.

[00:21:12] Matt Trifiro: That's a really interesting perspective. You know, there's this great phrase in open source software, because a lot of open source projects in the original days had the benevolent dictator, right?

The benevolent dictator, where you had one person who was controlling everything, and I guess Linus Torvalds is probably the patron saint of that. But because you've got this limited set of control points, they don't tend to move as quickly as you might want them to. And so in these larger open source projects that are more distributed, it's a very different animal.

And [00:21:40] the phrase is "code wins," which is: stop talking about it, just send in your pull request, and if the code looks good, then let's ship it, and we'll roll it back if it breaks.

[00:21:51] Vishwamitra Nandlall: Getting back to your original question of how we reconcile TDM with packet: so much stuff is now opaque in the internet. We started to embed things, and we broke this classic principle, the end-to-end principle, by putting in middleboxes: things that would do load balancing, things that would do firewalling and distributed denial-of-service protection and packet inspection.

And we put in network address translation. We started to load the internet up with a bunch of things. A lot of it was to stop it from caving in, to start to put some enforcement in terms of how orderly these packets were getting delivered. And then you reach the apex of that when we start to look at how do I bring timing back into the internet, and you've got IEEE 1588, you've got a whole bunch of Ethernet sync protocols that come in with the attitude of, we need to bring some level of synchronization back to the internet to create more predictable streams of information.

[00:22:55] Matt Trifiro: Do those predictable networking standards tend to employ the same strategy? What are the strategies that are employed to marry the best of best effort with the best of determinism?

[00:23:10] Vishwamitra Nandlall: Yeah. A lot of it comes down to how do I distribute timing across a network. And of course, when you grow up [00:23:20] designing things like SONET, or synchronous digital networks, you're very familiar with, okay, now I need to have this quantum source of timing. We've evolved a number of different ways to keep accurate timing, including GPS, but you look for some kind of stable timing source that is authoritative and universal. That then drives a hierarchy of endpoints that are synchronous relative to that timing source, and the different timing sources across the globe are paired with a central timing source, so they're all synchronized.

[00:23:59] Matt Trifiro: So even if your time-series data arrives in the wrong order, it knows what time each little bit is supposed to be played. Yeah.

[00:24:07] Vishwamitra Nandlall: You align timing between everything, and you can reorder things, and it becomes, like I said, at the end of the day much more predictable, and you get much less packet loss and all sorts of other generative benefits.

So as these things come in, there is much more of an attention on time. I think we're starting to recognize that time sensitivity can unlock a whole new class of applications that have value. And I think we're seeing it particularly from the perspective of video conferencing, because we just lived through the pandemic: having something more predictable in the face of something that's very lossy is a good thing, because it can create better, more intimate discussions.

But I think we're also seeing the need, as you mentioned earlier, for very accurate types of control systems for robotic control. Those types of things are becoming more and more in demand as we [00:25:00] roboticize factories, as we put mechanized and cobot-like technologies into factories and into everyday spaces. They require finer-grained control, and at that level the system needs something that's more reliable and more predictable.
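
A minimal sketch of the reordering idea Vish and Matt are describing: if every packet carries a timestamp from a shared clock (an IEEE 1588-style time source), a receiver can buffer late arrivals and play each one at its intended time. This is a simplified illustration, not a real PTP or jitter-buffer implementation.

```python
# Re-ordering late arrivals by their source timestamps, which only works
# because sender and receiver agree on a common clock.
import heapq

arrivals = [  # (arrival_order, source_timestamp_ms, payload)
    (1, 20, "B"),
    (2, 0,  "A"),   # arrived late, but carries the earliest timestamp
    (3, 40, "C"),
]

playout = []
for _, ts, payload in arrivals:
    heapq.heappush(playout, (ts, payload))   # buffer keyed by source time

while playout:
    ts, payload = heapq.heappop(playout)
    print(f"play {payload!r} at t={ts} ms")  # A, B, C in timestamp order
```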

[00:25:16] Matt Trifiro: Really interesting. So let's put a pin in that, and I want to come back to it, but I didn't want to lose track of where we started, which was how you ended up where you are. So, you were a hardware engineer.

And then how did you get from there to where you are now, which is at Dell?

[00:25:33] Vishwamitra Nandlall: You know, as I said, I started off as a hardware engineer. At Nortel, or at Bell-Northern Research, I started to work on what we called Christmas projects, where you basically got an assignment for the year to develop something really cool.

And then the VP would come by at the end of the year and take a look at it, and everybody would smile and say, okay, what are you doing next year? Some of them got productized, some of them didn't. It was an interesting era in corporate research, where you had the luxury of doing things that were interesting, very Xerox PARC-like. A lot of my background and skills came out of and were grown during that period of about five years. The first project I took on was this extended-architecture core, which was a multiprocessing system for voice processing.

That was probably the riskiest project I've ever been involved with, in that every technology in it was new. It was, let's do a contactless backplane, so everything's inductively coupled and I can just add in as many elements as I want and never have to [00:26:40] worry about capacitance or skew on the backplane traces. So that was one of them. Let's come up with a new shared-memory system. Let's come up with a different processing architecture: we were moving away from standard x86 to RISC processing, and we brought in the new Motorola and IBM architectures for PowerPC, which formed kind of the heart of our processing platform. And we developed gigabit-speed links. This was back in the nineties, which was pretty fantastical.

[00:27:08] Matt Trifiro: Quite an accomplishment back then.

[00:27:10] Vishwamitra Nandlall: So all sorts of new technologies got embedded, and all the risk in those technologies compounded. It took roughly five years to deliver that project, and it became very successful afterwards, an amazingly successful project.

But it really gave me the battle scars of how do you manage a development project when you have so many random variables, and how do you go about prototyping and de-risking the technology so that you can create an end product or an outcome that you know is executable. That was a great set of learnings.

Following that particular experience, Nortel went through the transition to IP, forming what our CEO at the time called the right-angle turn, and we made an acquisition of a large company, Bay Networks. I got involved in the terabit routing craze. Nortel had maybe several terabit routing platforms; I was working on something called the terabit packet core platform, and I became very invested in quality of service. It's hard to describe quality of service to someone who's not necessarily a practitioner, but it has a long and sordid history, and it has roots in our discussion about how do [00:28:20] you move from TDM to packet and still preserve some level of determinism.

And effectively the answer was, oh, well, put quality-of-service mechanisms in. There was this fairly lengthy debate in standards around something called IntServ, where you basically had an end-to-end path along which you were exchanging context, in terms of different types of admission control and scheduling criteria, between the nodes in that path, so that a packet could go through and get relatively the same treatment. And there were infinite knobs that could be pulled and set in order to make this apply to a particular scenario, which we realized was just unmanageable. So a lot of it got reduced to something called DiffServ, where there was a per-hop behavior that was put into place.

And those mechanisms exist in most of the routers and switches that are deployed globally, but they're very rarely used, because what people realized was that if you start to have problems in terms of delivery of a packet from one end to the next, whether it's jitter or latency, the best thing to do is just to put a faster pipe in place. The speed at which those pipes were being developed outpaced the need for quality of service, which was really for throttled or constrained pipes. It was quicker to just put in a fast enough pipe than to learn the management skills needed to tune and optimize those knobs. So quality of service was something I was very invested in; I did a lot of work on it.
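
For readers who want to see what a DiffServ marking looks like in practice, here is a minimal sketch. It only shows a sender tagging its packets; whether any router along the path honors the marking depends entirely on how that router is configured, and the destination address and port below are placeholder values.

```python
# DiffServ in one line: the sender sets a DSCP code point on its packets, and
# each router applies whatever per-hop behavior it is configured to give that
# marking. Here we mark a UDP socket's traffic as Expedited Forwarding.
import socket

EF = 46                      # "Expedited Forwarding" DSCP, typically for voice
tos = EF << 2                # DSCP occupies the upper 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Every datagram sent on this socket now carries the EF marking.
# (Placeholder address and port, for illustration only.)
sock.sendto(b"voice frame", ("192.0.2.10", 5004))
sock.close()
```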

[00:29:51] Matt Trifiro: How do you install a pipe that's faster than the speed of light? That's what I want to know.

[00:29:58] Vishwamitra Nandlall: Well, that might be quantum entanglement. [00:30:00] I don't think we're there as yet, but you can do some interesting things to increase the speed. You can increase transmission speed; it's very difficult to increase propagation speed. You're limited by the physical medium, which is interesting.

[00:30:13] Matt Trifiro: What's the difference between propagation and transmission?

[00:30:15] Vishwamitra Nandlall: So propagation speed is governed by the distance you have to travel in a given medium. On an optical path the propagation speed is close to the speed of light, and that'll tell you what the delay across that link is. Transmission speed is the time it takes a full packet, all the overhead and the payload, to move through.

And that's usually more of a function of channel bandwidth. So they're certainly two different things. The reason that distinction is really important is that it helps to shine a light on some of the limitations of edge computing, in that edge computing is largely a phenomenon of reducing propagation delay, but it's still limited by transmission delays. So if you get an edge node deployed in a metro site that's a hop away from an end user, you're closer in terms of placing that compute near the user. But if all of your delay is actually in that last hop, let's say it's a wireless hop, it doesn't matter: the transmission speed of that hop is actually the bottleneck in terms of delivery of the service.

And if it's compute-constrained, then the same logic applies. It's not necessarily a propagation delay; it's actually the way I'm computing on a particular workload. A good example would be cloud gaming or something like that, where doing [00:31:40] all the computation to render a particular image typically takes longer than the actual propagation delay itself. So the back-end server architecture and the application architecture dominate the equation in terms of where the delay is. Coming back to this notion of why those two things are important in edge computing: you can't just have an edge compute appliance, you also need to have a fast link, or a fast enough link, somewhere around 200 megabits per second.
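
To make the propagation-versus-transmission distinction concrete, here's a quick back-of-the-envelope sketch. The distances, link speeds, and packet size are illustrative assumptions, not figures from the conversation.

```python
# Propagation delay depends on distance and the medium; transmission delay
# depends on packet size and link bandwidth. Moving compute closer only helps
# with the first term.

def propagation_delay_ms(distance_km, speed_km_per_s=200_000):
    # roughly 2/3 of the speed of light in optical fiber
    return distance_km / speed_km_per_s * 1000

def transmission_delay_ms(packet_bytes, bandwidth_mbps):
    return packet_bytes * 8 / (bandwidth_mbps * 1_000_000) * 1000

packet = 1500  # bytes, a typical Ethernet MTU

# A far-away cloud region versus a nearby metro edge site:
print(propagation_delay_ms(2000), "ms propagation to a distant region")
print(propagation_delay_ms(50), "ms propagation to a metro edge site")

# But a constrained last hop dominates no matter where the compute sits:
print(transmission_delay_ms(packet, 1000), "ms to send 1500 B at 1 Gbps")
print(transmission_delay_ms(packet, 10), "ms to send 1500 B at 10 Mbps")
```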

Yeah. 

[00:32:09] Matt Trifiro: Super, super interesting. So how'd you end up at Dell? 

[00:32:12] Vishwamitra Nandlall: So I go through Nortel. I go into optical after packet, and most people who go into optical end up going into wireless, and that's the exact same career trajectory I had. I was developing 10-gigabit-per-second systems and looking at these really interesting lasers to effect that transition. A lot of those analog technologies apply equivalently to wireless, and I found myself at Nortel during the boom days of wireless, as we were moving from 1G to 2G and 2G to 3G, and Nortel kind of propels me all the way up to the beginning of LTE before they go bankrupt.

At that point I decided I didn't want anything to do with telecom; it left a sour taste in my mouth, and I went to Santa Clara to join a small Ethernet switching company called Extreme, where I was their group CTO for about a year. I got to work for Mark Canepa, who was the CEO at the time; he's of Sun fame, where he led the storage division.

So I felt that there was a lot I could learn from him, but it turns out that doing just one technology is pretty boring when you're used to telecom, where you usually have your [00:33:20] fingers in a lot of different technologies. So I left Extreme and went to Ericsson. I was there for the launch of 4G at AT&T; I was the North American CTO and had marketing and strategy.

So I was back in telecom, but not in R&D; I was in the market, understanding how customers were reacting to all this technology we had developed. I got a new, unique perspective on wireless. Then I decided I'd go to the operator side, because I wanted to understand, well, how are people using this and what does it take to operate it?

That gave me another lens into telecom. I was a group CTO at a company called Telstra in Australia, and then finally decided that, really, to complete my whole telecom journey, the thing I should do next was form my own startup and run my own company. So I returned to North America from Australia and was in Colorado.

I had a joint venture with Lockheed Martin to develop stratospheric airships that could keep station and could ultimately be used kind of like a low-Earth-orbit satellite, except in this case it's not orbiting near Earth, it's 20 kilometers up in the stratosphere, and it could provide macro-area coverage for 5G using antennas on the airship. That turned out to be pretty capital intensive to build out.

A lot of people were interested in it. We did spin out some of the technologies to form free-space optics systems for backhaul, but ultimately closing out that company opened a door into Dell. They had just been at the tail end of the merger with EMC. The CTO for Dell, John Roese, was a mentor of mine, [00:35:00] and he opened the door and said, hey, we're looking at reestablishing what our technology agenda is, and I need your help to come in and do that. And so that was kind of the journey to Dell. And what's your role at Dell now? So my initial role was to really help develop the strategy pillars for Dell in terms of their technology agenda, which is what we call the Big Six at Dell.

It starts with 5G, edge computing, cloud computing or multi-cloud computing, security, AI, and data management. Those were the big six areas we had identified as keys to Dell's future. Several of those have become business units or product units at Dell, the most prominent of which is the telecom systems business unit, where we made an investment in the industry, into 5G and into open RAN architectures.

We now have a data management business in our ISG business unit and an edge product unit within ISG as well. So three of them are kind of up and running. We're in the middle of developing our security strategy into kind of a horizontal product strategy for the company, and obviously the multi-cloud work has translated into our APEX project.

The only one that has maybe not been addressed through as discrete an organization has been AI. AI, as it turns out, has infected pretty much all of Dell's products, whether it's how do I do AI development with our storage systems, or how do I do training and inference with our new compute systems and accelerators. All of those things have really been part of our AI portfolio, and [00:36:40] we've also advanced a lot of things in terms of using AI in Dell products for doing prediction and maintenance, those types of things. So that was a lot of that particular role.

[00:36:53] Matt Trifiro: My image of Dell is still largely shaped by my experience in its early days, as a PC clone maker, right? The faster, better, cheaper version of what I could get from IBM. And what you've just described to me is a multi-vertical technology conglomerate. So it sounds like a pretty profound shift in Dell, and Dell's gone through different lives, right?

It acquired VMware, that's right, and then it divested VMware, which will now be part of Broadcom, and you'll probably end up competing with them, maybe. So here's kind of a random question: what percentage of Dell would you say is software versus hardware?

[00:37:35] Vishwamitra Nandlall: Most of Dell is software, an engineer would say. Most of Dell, yeah. Whether it's software-defined storage, or the embedded code that we deploy in our servers, or...

[00:37:49] Matt Trifiro: That's so interesting, because I think of Dell as selling iron.

[00:37:52] Vishwamitra Nandlall: Yeah. Now, what I think distinguishes this definition, when we say software and embedded software versus, let's say, client-facing applications: those are very different things. Much of Dell does not do web applications and isn't driving a ton of user interfaces at the end of the day. We do do that, but that's the minority of the development; the majority of the [00:38:20] development is embedded compute. And so there's a lot of software code that goes on behind the scenes enabling that.

[00:38:26] Matt Trifiro: Yeah, super interesting. And then there's this telecom business that you've started up. When I think of telecom, I think of everything from the processors that Qualcomm and Intel make to the baseband units and antennas that the Ericssons and Nokias make. Where does Dell's role in telecom begin and end?

How do you see yourself relating to the entire industry in terms of what you provide and who you compete with?

[00:38:53] Vishwamitra Nandlall: So, telecom is a really interesting space, in that it was dominated for 50 or 60 years by voice services. And particularly if we look into the wireless era, around 2000, when 3G starts to take hold, it's the first time it moves beyond voice: you start getting the ability to consume data, and it really creates this era of what I'll call the pocket internet. People were able to look up web apps and run searches on their feature phones, and they could still send texts and they could talk and communicate. So it's primarily a communication medium, but now I have an opportunity to do internet browsing. With 4G, it shifts yet again, subtly, in two dimensions.

One of which is, now I'm getting something called a smartphone, which is more than a feature phone. I'm not going through this proxy gateway to get onto the internet; I'm able to access the internet through some revolutions that companies like Apple introduced. So now you've got this iconic iPhone [00:40:00] that comes out in 2007 or so; it's timed near the release of 4G, which comes out in 2010.

And people are starting to use many more applications on this iPhone. Those applications are mated with server backends, and you start to see this separation of content from carrier, which had never occurred before: a wireless connection was, in fact, a voice connection, and the two were indistinguishable.

Now you've got this bifurcation, and that fundamentally changes a lot of things in the internet. We start to see this content economy take off, and 4G is really the tipping point, where it's all about video and video starts to dominate. So as 3G comes out of the portable internet era, you go into 4G and it's video.

Then 5G comes along. The nice thing about the portable internet and video is that you're catering to consumer needs and demands, and at the same time we were in the middle of really trying to diffuse that technology, for instance in North America, across the 300 million people or so in the US. Even in 2010 there were people who didn't have smartphones.

So there was an opportunity for growth: you build out that network, more people were going to be added in as customers, and the operators who were building the networks were going to get the subscription revenues. And then they could upsell on data plans, because people needed more data. Now we're in an era of flat-rate pricing.

The consumer markets are saturated, so the opportunity for growth in 5G is a big question mark. Is it just going to be, we're going to get faster and it's completely [00:41:40] hygiene, and if you want to be competitive as an operator you're going to have to upgrade to 5G to keep up? That seems like not the right answer; at the end of the day, it seems like we're making up an economy. But it turns out there's another market to focus on: the enterprise market.

And of course, when you say enterprise, it's almost synonymous with Dell. Dell is serving the enterprise market, whether it's on the client side or on the infrastructure side. We have a massive field organization; we have intimacy with a lot of the enterprises that would need this type of connectivity service.

So that kind of rings the first bell for Dell getting involved in 5G and telecom infrastructure. The second bell that gets rung is that with 5G there's a recognition that there isn't just one architecture, there isn't just one way to implement 5G. The way we've implemented telecom infrastructure has always been some proprietary implementation that a large network equipment provider puts out.

It might be standardized in terms of the cellular specifications, so there's interworking, but fundamentally the way you built them was largely not interoperable. So a new set of implementation tenets comes in, and people start to talk about, well, let's disaggregate hardware from software.

Let's not make them tightly coupled.

[00:43:05] Matt Trifiro: Right. Let's virtualize our network functions, let's open up the RAN interface...

[00:43:10] Vishwamitra Nandlall: Yes, let's do all these different things. Let's separate user and control planes. Let's do all these classical things that we've learned from scaling IT infrastructure. We [00:43:20] started to apply that, and suddenly cloud technologies are making their way into the telecom architecture, the carrier architecture.

[00:43:30] Matt Trifiro: And they're just an enterprise at some level. That's right. They probably buy lots of Dell stuff anyway. Exactly.

[00:43:37] Vishwamitra Nandlall: So this movement towards cloud-aligned technologies, this movement towards standardizing the underlying performance layer and separating the application from the underlying compute, opens the door for Dell.

We saw, architecturally, that this is an industry that is basically coming to the heartland of Dell's R&D engine. Those two things made us very interested in understanding how we could position ourselves. What we saw was the emergence of a movement called open RAN. You've got the Telecom Infra Project that was under Facebook, and you've got the O-RAN Alliance, which has a number of different telecom vendors and operators involved, starting to come out and specify, okay, what does an open RAN look like?

As opposed to the old 3GPP standards-based mechanisms for advocating architectures, this was very much an open source effort. And with that, we felt that if it was going to take root, someone was going to need to do the equivalent of what Red Hat did for Linux: someone needs to productize and industrialize the open source element so that we can create a marketplace, and it needed to be a company with significant scale that could impact the market.

All of which [00:45:00] led us to believe that we would have the opportunity to be a critical part of that. And so we invested in really trying to create that marketplace, and we felt that the one thing that was really needed was someone who could take all these components from different vendors. There are a lot of startups in this space, a lot of different contributors to the open RAN ecosystem, and they needed someone who could take them all, build an end-to-end system, be the systems integrator of record, and then deliver that whole cloth back to the operator community. That's where Dell started to spend a lot of its resources and invest, with the understanding that many of these companies are software-based companies and would need a standardized performance layer based on Dell servers, which is obviously the way we're capturing value from this ecosystem.

[00:45:55] Narrator 2: That does it for this episode of Over the Edge. If you're enjoying the show, please leave a rating and a review and tell a friend. Over the Edge is made possible through the generous sponsorship of our partners at Dell Technologies.

Simplify your edge so you can generate more value. Learn more by visiting dell.com.