Over The Edge

Using High Performance Computing to Solve Grand Challenges with Wolfgang Gentzsch, Co-Founder and President of UberCloud

Episode Summary

This episode features an interview between Matt Trifiro and Wolfgang Gentzsch, Co-Founder and President of UberCloud. Wolfgang is a passionate engineer, computer scientist, and entrepreneur with 30 years of experience working in engineering simulations, high-performance computing, scientific research, university teaching, and the software industry. In this episode, Wolfgang describes how advancements in connectivity and processing power can lead to revolutionary changes in everything from technology to healthcare. He also explains how his company is working to help democratize access to computing power in the cloud, which was previously too expensive or too complex for most organizations to use.

Episode Notes

This episode features an interview between Matt Trifiro and Wolfgang Gentzsch, Co-Founder and President of UberCloud. Wolfgang is a passionate engineer, computer scientist, and entrepreneur with 30 years of experience working in engineering simulations, high-performance computing, scientific research, university teaching, and the software industry, from hands-on practices to expert consulting to leadership positions. He is an entrepreneur with six successful startups in Germany and the US, in engineering, high-performance computing, and cloud. Wolfgang is a member of numerous conference program, steering, and organizing committees, with 50+ keynote speaker appointments.

In this episode, Wolfgang tells us about the early days of network computing and how the grid was the predecessor to the cloud. He describes how advancements in connectivity and processing power can lead to revolutionary changes in everything from technology to healthcare. Wolfgang also explains what he thinks edge computing is today, and how his company is working to help democratize access to computing power in the cloud that was previously too expensive or too complex for most organizations to use.

---------

Key Quotes:

“Definitely, the grid was the predecessor of the cloud. And that's why there is not a real huge difference in both. The cloud infrastructure was completely virtualized and therefore fully automated and now I use that word democratized because almost everybody was able to use cloud resources then; which you couldn't easily say about grid. The grid was really for specialists in research centers.” 

“You can innovate at your fingertips these days. You don't have to build, you know, 2, 3, 4, 5 models and crash them against the wall. Now you do it in the cloud, which might cost a thousand dollars or $5,000 even, but it's much, much, much cheaper. So, there are tons of benefits these days when you move to the cloud.” 

“Now HPC is really in the hands of everybody. For engineers and scientists a few decades ago it was only given into the hands of specialists, and that door is open for so many new applications, making any kind of research or products basically come out much faster with exponential acceleration, which will continue to help us to solve problems, real problems. I mean, like in healthcare, for example, or climate and weather forecast, and also new technologies like electric cars, autonomous driving, and all that stuff. So, I mean it is successfully making our lives even more convenient, more comfortable, and also solving mankind's problems which we are facing.”

---------

Show Timestamps:

(01:45) Getting involved in technology

(03:05) Difference Between Scalar and Vector Computers

(07:45) Convergence of Parallel Computing and the Internet

(13:00) Network Computing and the Cloud  

(19:45) Convergence of Grid and Cloud Computing 

(23:45) High Performance Computing and Super Computing 

(28:15) Difference Between the Cloud and High Performance Computing

(30:45) UberCloud

(39:45) Living Heart Valve Project 

(41:45) UberCloud Project Example

(46:45) Growth of High Performance Computing and the Edge 

(53:05) Future of the Cloud

(55:15) Is the Network or the Internet the Computer?

(60:30) What’s Exciting in the Future

--------

Sponsor:

Over the Edge is brought to you by Dell Technologies - unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we’re here to help you simplify your edge so you can generate more value. Learn more by visiting DellTechnologies.com/SimplifyYourEdge for more information or click on the link in the show notes.

--------

Links:

Connect with Matt on LinkedIn

Connect with Wolfgang on LinkedIn

www.CaspianStudios.com

Episode Transcription

[00:00:00] Narrator 1: Hello and welcome to Over the Edge.

Today’s episode features an interview between Matt Trifiro and Wolfgang Gentzsch, Co-Founder and President of UberCloud.

Wolfgang is a passionate engineer, computer scientist, and entrepreneur with 30 years of experience working in engineering simulations, high-performance computing, scientific research, university teaching, and the software industry. He is an entrepreneur with six successful startups in Germany and the US, in engineering, high-performance computing, and cloud. 

In this episode, Wolfgang tells us about the early days of network computing and how the grid was the predecessor to the cloud. He describes how advancements in connectivity and processing power can lead to revolutionary changes in everything from technology to healthcare. Wolfgang also explains what he thinks edge computing is today, and how his company is working to help democratize access to computing power in the cloud, which was previously too expensive or too complex for most organizations to use.

But before we get into it, here’s a brief word from our sponsors…

[00:01:16] Narrator 2: Over the Edge is brought to you by Dell Technologies. Unlock the potential of your infrastructure with edge solutions, from hardware and software to data and operations, across your entire multi-cloud environment. We're here to help you simplify your edge so you can generate more value. Learn more by visiting dell.com for more information or click on the link in the show notes. 

[00:01:37] Narrator 1: 

And now, please enjoy this interview between Matt Trifiro and  Wolfgang Gentzsch, Co-Founder and President of UberCloud.

[00:01:46] Matt Trifiro: One of the things I love to ask my guests right off the bat is just, like, how'd you get involved in technology?

[00:01:51] Wolfgang Gentzsch: Yeah. I mean, someone with my age, it started early, a long time ago. I studied math and physics at the Technical University of Aachen, and "technical university" says it all. So that's all about technology. I did my PhD at the Technical University in Darmstadt in Germany, in mathematics, applied already then to solve complex engineering simulation problems.

So very early, still during my studies, after the PhD, I was a researcher at the Max Planck Institute in Munich, developing numerical algorithms for plasma physics, again, a big piece of technology there. And last but not least, the next step, getting even deeper into technology, was when I became head of the computational fluid dynamics department at the DLR in Germany, in Göttingen.

And I usually tend to call it the German NASA, for those who don't know DLR internationally. And so, yeah, to summarize like a 30-plus-year technology life so far, my technology journey began with so-called scalar, von Neumann architecture computers, then moving over to vector computers.

I think everybody, almost everybody knows the Cray, the Cray-1, to parallel machines then like Intel, MasPar, nCUBE, and yeah, some [00:03:20] remember the Connection Machine, and finally then grid computing with basically widely distributed network computing resources. Yeah. So let's touch on some 

[00:03:28] Matt Trifiro: of those, cuz those are all really interesting.

So first of all, what's the difference between a scalar computer and a vector computer? 

[00:03:34] Wolfgang Gentzsch: A scalar computer handles every arithmetic operation in a scalar, serial mode, one after one after one, 

[00:03:43] Matt Trifiro: which is how we tend to think of computers, in, in, 

[00:03:46] Wolfgang Gentzsch: in simple terms. Yeah, yeah. Exactly. You have one instruction, then you have the next instruction, et cetera.

And the vector computer bundles instructions: basically, whenever the data has the form of a vector, having like tens or hundreds of vector elements, then there is one operation, vector plus vector equals vector, or vector times scalar equals vector. So this is just one vector instruction.

And obviously these beasts are a hundred times faster than von Neumann-based computers, and the next generation, the parallel machines, by the way, they are another 100 to 1000 times faster than even vector computers, and so on. So these were, those days, the big jumps, the big disruptions or advances in computing, then applying this to more and more complex applications.
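The scalar-versus-vector distinction Wolfgang draws can be sketched in Python with NumPy. The explicit loop mimics a scalar von Neumann machine issuing one instruction per element; the NumPy expression stands for a single vector instruction over all elements at once. This is an illustration only, not actual Cray machine code:

```python
import numpy as np

# Scalar (von Neumann) style: one arithmetic operation per instruction,
# issued one after another.
def scalar_add(a, b):
    result = []
    for x, y in zip(a, b):
        result.append(x + y)  # one element at a time
    return result

# Vector style: one expression stands for one vector instruction over
# all elements (NumPy dispatches this to vectorized native code).
def vector_add(a, b):
    return a + b  # vector + vector = vector

a = np.arange(4, dtype=float)  # [0. 1. 2. 3.]
b = np.ones(4)                 # [1. 1. 1. 1.]
assert scalar_add(a, b) == list(vector_add(a, b))
```

The speedups Wolfgang mentions come from the vector form: the hardware streams whole vectors out of the memory banks instead of fetching and decoding an instruction per element.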

Yeah. So, so 

[00:04:45] Matt Trifiro: I'm interested, you set up a comparison between these other elements and a von Neumann computer, but I, I tend to think of all computers as von Neumann computers. Am I, am I thinking about it incorrectly? What do you mean 

[00:04:57] Wolfgang Gentzsch: by that? Yeah, I mean, uh, the von Neumann [00:05:00] architecture was with a central processing unit for one operation, one floating point operation at a time.

And obviously when you zoom into a vector computer's instruction, then you recognize, yeah, there are similar scalar operations, but on the vector element level. And when you zoom out, you see, hey, these are 100 operations which are all identical, but on different data. So, I mean, uh, it's certainly a difference.

Because it's much more clever to have the data streaming out of the memory banks, so to speak, uh, into the vector units, uh, again, 100 and even more times faster. How do you think 

[00:05:43] Matt Trifiro: of the advances back in those days compared to, say, adjacent processing capabilities like GPUs and even FPGAs? Is there a relationship that you see, or is it just different 

[00:05:55] Wolfgang Gentzsch: technologies?

No, no. I mean, that's a great point. We already then, in the late seventies, had SIMD machines, so an array of processors which, uh, not just handled vectors, but whole matrices. I mean, many of the physical problems which we approach by numerical algorithms turn into matrix times vector equals, blah, blah, blah.

And, uh, an array processor, like, there was a machine called the DAP, for example, the Distributed Array Processor. That was in the late seventies. A great guy connected to this [00:06:40] machine was PR Smith, who some know really well, the architect, and this machine was able to handle an array, like a matrix, in one single instruction.

Okay. And, and by the way, very much looking like a GPU does it today. Yeah. 
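The array-processor idea, one instruction applied across a whole grid of data, can be illustrated with the matrix-times-vector shape Wolfgang mentions, which is also what makes these machines look like today's GPUs. This is a NumPy analogy, not the DAP's actual instruction set:

```python
import numpy as np

# One "array instruction" over a whole matrix, in the spirit of a SIMD
# array processor. Discretized physics problems boil down to exactly
# this matrix-times-vector form.
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

# Conceptually a single instruction applied across all elements;
# a scalar machine would need one multiply-add per matrix entry.
y = A @ x
assert y.tolist() == [2.0, 7.0]
```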

[00:07:00] Matt Trifiro: And so clearly the ability to solve many, let's call them math problems for lack of a, a better term, simultaneously and in parallel is useful when you're dealing with anything of great complexity, like computational fluid dynamics.

You mentioned earlier, which is like, you're dealing ideally with a digital twin of whatever you're doing. You probably didn't call 'em digital twins back then, but that's essentially what you're doing. You're building a digital twin of a, a model of a weather system, or the air flowing past an airfoil, or something.

How is that being used today? How, how is this sort of parallel processing being used? Because again, in, in the world that I live in, which is the internet, it's Intel servers and GPUs, and it's all this modern thing. And yet there's a, there seems to be this parallel universe in the scientific computing world. Help us understand, with the sort of the, this current generation of, of parallel computing and how we think of the internet, how they're either separate or 

[00:08:00] Wolfgang Gentzsch: coming together.

Mm-hmm. So our current architectures, again, from a bird's eye view, they originated in the mid to late nineties with an architecture that was called Beowulf at those days. And these were commodity CPUs, commodity compute nodes, [00:08:20] basically interconnected, ideally by a fast interconnect; today we would say InfiniBand.

And in a sense, with commodity compute nodes, then compute servers, you were able to turn this machine into a high performance computing system, or in short, HPC. And since then this basic architecture hasn't changed a lot. So now recently with the, uh, Frontier exascale system at the US Oak Ridge National Lab, which is a $500 million machine in the meantime.

So number one on the Top500, so this is the fastest supercomputer in the world. On a higher level, it's the same architecture, but with much more sophisticated and, uh, latest generation technologies, obviously, compared to what the Beowulf then was able to do in the late nineties, beginning two thousands.
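A rough single-machine sketch of the Beowulf idea: split one computation across workers that stand in for commodity compute nodes, then gather the partial results. Python threads stand in here for what a real cluster would do with MPI ranks over InfiniBand:

```python
from concurrent.futures import ThreadPoolExecutor

# Each worker stands in for one commodity compute node in the cluster.
def simulate_chunk(chunk):
    # stand-in for the per-node share of a larger computation
    return sum(x * x for x in chunk)

data = list(range(1000))
# Scatter: split the problem across 4 "nodes" (round-robin strides).
chunks = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(simulate_chunk, chunks))
# Gather: combine the per-node partial results into the final answer.
total = sum(partials)
assert total == sum(x * x for x in data)
```

The scatter-compute-gather shape is the same whether the "nodes" are threads, Beowulf boxes on Ethernet, or exascale blades on InfiniBand; what changes is the interconnect and the scale.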

That's interesting. And so, so 

[00:09:21] Matt Trifiro: we have this world that, again, most people interact with, let's say Google, and it does feel like a parallel to high performance computing. It's almost like the, the people that figured out how to do distributed processing within a Google data center, to spread a bunch of tasks out, are solving a very similar problem to what you're trying to do with parallelism at a high performance computing level.

Are they parallel universes or, or do the scientists go back and forth between let's use the same principles to solve a parallel problem? Mm-hmm regarding a, a weather system or a, or a wind tunnel versus solving a parallel problem, which is like giving somebody search [00:10:00] results faster. 

[00:10:01] Wolfgang Gentzsch: I mean, again, from a bird's eye view of Google data centers, uh, doing search, for example, I mean, they are as fast as, you basically figure it out, like in 0.1 seconds answering your search request. They do it in a tree-like,

parallel way as well. But in high performance computing, we don't deal directly with these applications. In research and industry, we are certainly solving the grand challenges, so to speak, yeah, I hate that word, but of mankind, which is, which is climate change, environmental challenges. Yeah.

Earthquake and tornado forecast, for example, or, yeah, efficient electric car engines and batteries, right? Or recently we were involved in exciting challenges like heart and brain diseases that you can simulate. And as soon as you can simulate something with high accuracy, you get deeper insight, and you can ideally, and, uh, this is already done with the heart,

you can repair heart valves, for example, which are defective. Or you can look at arrhythmia in the same way and identify the best suited drugs for basically, um, calming down the arrhythmia attacks. Or, like, brain disease: schizophrenia is, is another challenge which we take on together with one of our customers. And yeah, or not to [00:11:40] forget now with the virus, virology, drug design. In short, Matt, whenever you design and develop a product for the next generation, which you intend to bring into the market in one or two years or so, it's all about simulating before you build it.

And that's the high performance computing that I grew up with. Basically. Yeah, 

[00:12:08] Matt Trifiro: It's, it's so interesting. It is clearly, I mean, to me, obviously something that's needed, but it, it's something that, that has largely existed in the sort of rarefied air of, of research scientists and physicists. And it's really interesting; it's very different from the, the young person coming outta school that goes to work for Facebook and Google and is trying to improve the, the speed of a, of a web interface.

Yeah, that's really interesting. Okay. So let's, we'll come back to high performance computing, but sure. Before we do that, I want to come back to sort of the history of this. And so one, one of the most, I think interesting topics is you and I met because I found an article that you had written or a transcript from a talk that you had given around the early days.

Well, in the days of, of a product that, that was called Sun Grid, which was Sun's utility computing system, which is arguably one of the first cloud computing systems ever to exist. I think the timing was around when Amazon Web Services, when, like, S3 and EC2 were. So there was definitely a lot of things happening at the same time, but can you tell us your version of that?

Like how, how you came to cloud, how you came to this "the network is the computer" [00:13:20] or whatever version of that. Let's, let's go back to that. What were you doing, and how did you end 

[00:13:24] Wolfgang Gentzsch: up at Sun? So, yeah. Great, great. Really a pleasure to talk about this one. In the early nineties, I founded my first software company, Genias Software, standing for, like, a company for numerically intensive applications and supercomputing.

And we found out that we, we were able to interconnect workstations into a network, with 20 or 40 workstations. The very first project was with Volkswagen. They had, I remember, over 40 Silicon Graphics workstations standing around, and nobody had, had a clue about what each of these machines was doing and which ones were idle and others too busy, et cetera.

So we helped them to interconnect them with a network, to turn them, so to speak, into a very rough or, or crude parallel machine then, and started running, at those days, codes like FIRE for CFD combustion simulations on those machines. And in, in the course of that project, we developed a software which we called

CODINE, Computing in Distributed Networked Environments, later on called Sun Grid Engine. Now you see the context. And so this system was basically a workflow or a workload management environment that was able to manage the jobs submitted and distributed onto this [00:15:00] cluster of workstations. And that's how Sun Grid was born.

Sun Microsystems, a little bit later in the late nineties, developed a 24-processor system which they called the animal farm, yeah, the animal farm, with 24 compute nodes tightly coupled now in a cabinet. And they were looking for a management environment, a management system, and they found us, looked at us, and acquired my company.

The part of the, of the company that developed this grid engine was called Gridware. And so this technology came into Sun and was the foundation for Sun's grid computing, and later then basically moving into cloud computing. So that was in 2000, while, as you mentioned, Amazon's AWS started in 2006.

That was when Sun had already developed its N1. That was built on, on Sun Grid Engine, but much more virtualized then, and much more elegant and more general then, and that was the real predecessor of cloud computing. And that was in 2004. And then two, two years later, Amazon came with its brilliant idea

of setting up AWS. So what would you 

[00:16:23] Matt Trifiro: say is the difference between what Sun was working on and what Amazon eventually came out with? What's the difference? What's the insight that Amazon had that Sun didn't? 

[00:16:32] Wolfgang Gentzsch: So the technology, the one which was based on Sun Grid Engine, is more loosely distributed, loosely [00:16:40]

coupled through an enterprise network, for example. Already then we enhanced this technology from cluster computing to enterprise computing, then to real grid computing, wide area network connection of large computers even, and a lot of manual interactions. And that was the reason why this technology, grid computing, was mainly used by researchers.

They are fearless. They don't care about manual interactions, interference, et cetera. While the next step, cloud computing, was really providing a fully automated, basically, infrastructure of resources, which was very necessary, because they already then provided different kinds of resources based on different processors, slow to very fast ones to high performance ones, depending on the application which you moved to the,

to, to the, to the cloud. So one was still more in its infancy. I mean, definitely the grid was the predecessor of cloud, and that's why there is not a real huge difference in both. The cloud infrastructure was completely virtualized and therefore fully automated and more, uh, and, and now I use that word, uh, democratized, because almost everybody was able to use cloud resources then, which you couldn't easily say about grid. Grid was really for specialists in research centers.

[00:18:13] Matt Trifiro: That's really interesting. And, and to, to a very large part of the population, it still has that, that bespoke meaning, that, that [00:18:20] even though the grid computing today, which exists in the scientific community and elsewhere, looks a lot like cloud computing, yeah, for scientific applications. And so my historical understanding of, of grid is, is somewhat limited.

So please correct me. My, my modern interest is to merge those worlds, because I think we, we lost a lot of really interesting concepts with the metaphor of grid and some of the work that Ian Foster did around autonomic computing and those sorts of things. And it's, you say, the manual intervention in the, the grid back in the day versus the automation that Amazon provided, relative to

what I think we need. Amazon's automation is, is rudimentary, because it still requires a tremendous amount of human intervention, certainly in setting it up and designing it. Now, one of the problems that the, the grid computer scientists encountered and attempted to solve, and I don't know how far they solved it, was the ability to sort of pool, yeah,

[00:19:19] Wolfgang Gentzsch: Heterogeneous resources, then? So, right. Yes, absolutely. 

[00:19:22] Matt Trifiro: And that doesn't really happen in the cloud. I mean, it's starting to happen with some of the multi-cloud work, which is interesting. It's like another level of abstraction you've got: you're virtualizing the whole cloud as opposed to just virtualizing the machine. Do you see the, sort of the, the world of grid computing and the world of cloud computing largely staying,

I don't know, conceptually separate, or in different cliques, or, or do you see those worlds merge? Uh, so 

[00:19:47] Wolfgang Gentzsch: grid computing is definitely a predecessor of cloud computing. So what in the past you were able to do with grid computing, you now can do with cloud, and especially, you're [00:20:00] so right when you say it's still, the infrastructure is still quite rudimentary. But on top, companies like UberCloud, they have developed engineering simulation platforms which span multiple clouds, even.

So you have heterogeneous resources, either in one data center, which you can pull together for solving a complex application, like with pre-processing, solver, post-processing, visual presentation, et cetera. So now, with these very highly intelligent layers on top of the application layers, which have to handle these complex engineering or research applications, they enable

real democratization these days, right? While, uh, in the early days with grid, I mean, in the very beginning, I think almost your first question started with, you have read a paper, and yes, in 2000, I remember exactly that paper which you were quoting, where I compared grid computing with a utility, like water, electricity, et cetera.

Now, in those days, you tapped into grid resources at your fingertip. That was the idea, but we were still quite far away. So it was more a vision then, which has been described in that 2000 paper. But today we are there, obviously. I mean, with an engineering simulation platform that I just mentioned, you can set up any resource for your specific application you have at your fingertip, with a few clicks, click, click, click. Then you

build up your cluster, then [00:21:40] you move your application container. So this container technology is intellectual property of UberCloud that we have developed over the last seven years. And with that application container, or engineering workflow container, then you move it from a repository onto that cluster.

You do your work in the same way, identical to what you are used to doing on premise. Uh, so there's no learning needed these days, which is wonderful. And engineers are used to doing this on premise, and now they do the same thing in the cloud. And so when they are ready, they shut down the cluster. They don't pay anymore.

So it's pay as you go, and everything is on demand. And these containers have, among quite a few, uh, bells and whistles for high performance computing, a nice feature that you can work also not, not only batch, but interactively, right? So you can interact with your application while it's running. You can modify parameters, you can check intermediate results, throw them away, do geometry improvements, and all that.

In the meantime, I mean, it's democratization at its 

[00:22:46] Matt Trifiro: best. Let's talk a little bit about high performance computing, HPC. That, that's a word that, that I'm not very familiar with. Well, at least I, I've come across it much more in my, my research in the last few years, but I'm not really familiar with it. The

term that I'm more familiar with, and I'm interested in how they relate, is, is supercomputer. I mean, I grew up in the era where, you know, we would see images of these Cray computers, and they just looked so impressive. And now my, my Android phone is like 10 times more powerful than, than the original Cray.

I actually saw a Cray at the NSA museum in, in Maryland, right next to [00:23:20] the, the, uh, the Enigma machine that they, that, that they'd figured out how to break. What makes a Cray a supercomputer? And then when people say, well, now you've got a supercomputer in your hands, how does that relate to what you think of as high performance computing?

Yeah, 

[00:23:37] Wolfgang Gentzsch: Obviously, that's a great question. I mean, you, you wanna look back, you mentioned Cray. So the first Cray came out in the late seventies, and it was like a $20 million machine then. And it was about 100 times faster than the previous von Neumann machines. And so that's obviously a big jump, and that's why people called it then a supercomputer.

So, so 

[00:24:01] Matt Trifiro: was its primary advantage at the time that it could, it could be programmed to run many, many different calculations in parallel and then combine the results? 

[00:24:11] Wolfgang Gentzsch: So it was a vector machine, right? And many physics problems, challenges, tasks, they can be derived from, uh, some sort of Newton equation, right, goes back 300-plus years, right. When you discretize it, with finite volume, finite element, and other methods, because you cannot solve them in the continuum. Um, so yeah, there is no one solution, a mathematical solution, available. So you solve it discretely, right, in, uh, elements in your computational domain.

Goes back 300 plus years. Right. Because when you discretized it with a finite volume finite element and other methods, because you cannot solve them in the continuum. Um, so yeah, there is no one solution, a mathematical solution available. So you solve it discreetly right. In, uh, elements in your computation domain.

Right. And that leads to matrices and to vectors and a vector machine just. So [00:25:00] you, you take the first one and then every other data comes automatically out of the memory banks and that allowed Seymour Cray then, uh, to build a machine, which was able to not only work on scaler arithmetic expressions, but on whole vectors, one instruction is for example, vector plus vector one macro instruction, right?

So that, that's why it was 100 times faster than the von Neumann machine. And when you look at what comes afterwards, so the parallel machines, the grid, the cloud, et cetera, I mean, you simply now add more and more machines to solve the same task, and with, like, a parallel machine with 1000 processors, so that ideally gets 1000 times faster.

And when you zoom into the processors, they are little vector machines today, right? Like your phone is kind of a hundred times faster than a Cray-1 system 40 years ago. Were the Cray 

[00:26:04] Matt Trifiro: systems, were they particularly good at sort of matrix math in the way that GPUs are today? Is there any relationship between GPUs and, and the vector machines? 

[00:26:13] Wolfgang Gentzsch: So you could say it's a predecessor. Yeah, absolutely. Because matrix-vector operations are obviously ideally suited for GPUs as well. A SIMD machine: single instruction, multiple data. SIMD, single instruction, is like a vector instruction; multiple data is the number of elements in the vector.

Same with GPUs today. Yes, you're right. Very interesting. 

[00:26:38] Matt Trifiro: Okay. So now high [00:26:40] performance computing, how does that relate to super computing? Yeah. 

[00:26:43] Wolfgang Gentzsch: Okay. I mean, the supercomputer at those days, and for another 20 years or so, was just in the hands of highly trained researchers who could really handle these machines, still programming often enough in Fortran or C, C++, Fortran 90, et cetera.

Right. So not for the masses. But then the Beowulf type of parallel computers, uh, came on the market, like 1000 compute nodes, all looking the same, and built even, in the beginning, by the engineers themselves, right, who then used the machines. And that started the period of high performance computing.

So it wasn't supercomputing anymore, which, as I said, was more for the researchers; HPC is more for the masses. And when HPC started, we then had the idea to democratize high performance computing, so that basically every engineer and every researcher could handle not only these machines, but especially the applications on top of them.

[00:27:48] Matt Trifiro: When you say thousands of computers that all look the same, that can be set to do this matrix math or these vector sorts of problems, work on lots of problems simultaneously, it sounds a lot like the cloud. What's the difference between a cloud that looks like a supercomputer, because it has millions of computers potentially, and a high performance computing system that might be sitting in a data center 

[00:28:13] Wolfgang Gentzsch: somewhere.

So a supercomputer is obviously a very tightly coupled machine. So one, one of the [00:28:20] ingredients for that one is a really high, very fast interconnect connecting all the nodes in that computer. It's not Ethernet at that point. And, and now it's not, not Ethernet anymore, it's InfiniBand, already now second generation or so, very fast, with very low latency, able then to scale out.

So ideally, scaling here means you have 1000 compute nodes and your code runs 1000 times faster compared to just one compute node. So it's linear scaling, and, uh, if you do it right, and if the algorithm is well suited for that one, you get close to it. So that's a supercomputer, right? Tightly coupled, highly efficient scaling, et cetera.
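The caveat in "you get close to it" is usually quantified with Amdahl's law, which Wolfgang doesn't name but is describing: any serial fraction of the code caps the achievable speedup no matter how many nodes you add. A small sketch:

```python
# Amdahl's law: with serial fraction s, speedup on n nodes is
# 1 / (s + (1 - s) / n), which can never exceed 1 / s.
def amdahl_speedup(serial_fraction, nodes):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

# A perfectly parallel code scales linearly: 1000 nodes, ~1000x.
assert abs(amdahl_speedup(0.0, 1000) - 1000.0) < 1e-6
# But even 1% serial work caps 1000 nodes at roughly 91x.
assert 90 < amdahl_speedup(0.01, 1000) < 92
```

This is why the tightly coupled interconnect matters so much: communication and synchronization overhead effectively add to the serial fraction.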

Uh, cloud itself is not necessarily scaling, right? So you have, uh, uh, a wide variety of compute nodes. This is usually not the case in a supercomputer; in a supercomputer, all the nodes are looking alike, ideally, for very good reasons, no time to go into details with that one here. And so a cloud, on the other hand, has

at least, I mean, over 100 different compute nodes, but, uh, you mentioned GPUs, also FPGAs, but almost any kind of CPUs based on Intel or AMD architecture, for example. But with the UberCloud engineering simulation platform, you can basically pick and, and choose and put your system together at your fingertip.

All kinds of different nodes which you need: you might need there GPUs for acceleration, or standard [00:30:00] CPUs for doing the broader calculations, et cetera. And so the cloud is much more flexible. A supercomputer is not flexible, right? It, it, it just does one thing really well, which is, um, structured arithmetic operations on multiple data, but the cloud can do basically everything for every application.

You can choose as a right compute nodes and the interconnect, and also how you get the data in and out. And so 

[00:30:27] Matt Trifiro: I did wanna talk a bit about UberCloud. So tell me: UberCloud is your company, the one you're running today. What problem are you solving?

[00:30:35] Wolfgang Gentzsch: I mean, we are taking the engineering simulation workflows as they are, not changing anything, not touching anything.

We put them in a special HPC, high performance computing, container that we have developed over the last eight years. These containers are sitting in the customer's, the user's, repository, like a library of application containers. And whenever the engineer wants to do, for example, a fluid dynamics simulation, then he takes the container off the shelf, puts it on the cloud engineering simulation platform, and it's ready to go.

Right. There's basically no learning needed. It's the same look and feel and working environment in the cloud as the user is used to having in the on-premise environment.

[00:31:29] Matt Trifiro: Are you replacing supercomputers with the cloud, in essence?

[00:31:32] Wolfgang Gentzsch: Yeah. And again, Matt, I wanna insist on not using "supercomputers" in that context, [00:31:40] because the engineers in industry these days, I mean, they are very often happy with a few hundred cores and a dozen compute nodes, et cetera, because that is already 10 times faster than what he or she is used to on premise. Right? A factor of 10 is already good, to get from days to hours, or from hours to minutes, to the result, speeding up your whole simulation process by factors. We have customers who are 10 times faster than on-premise.

Others are even 50 times faster than on-premise, because they use a bunch of GPUs that they don't have on-premise, for example. So they excel there, with, like, eight GPUs, for example. Yeah.

[00:32:23] Matt Trifiro: Now, back to the typical HPC workflows that people build today, running on premises. I mean, there still are supercomputers, but those still cost millions of dollars, and they're orders of magnitude more powerful than what existed before. And probably most scientists don't have access to those unless they work for the government or some very, very large research organization. (Yes, that's right.) So you're saying that most of the users of UberCloud are performing their HPC workloads on premises, on typical off-the-shelf Intel servers. And you're saying: look, we can now take what you do on premises and run it in the cloud. It's pay as you go, and you don't have to change your workloads at all. You can store things in a repository; you called it a container. Is it related to Linux containers, or is that just a metaphor you're using?

[00:33:11] Wolfgang Gentzsch: No, no, it is related to Linux containers, and very intentionally. In the early days we used the standard [00:33:20] Docker runtime environment and put 32 layers on top, and each layer is kind of an HPC layer, right? So, for InfiniBand: these containers talk to InfiniBand, to MPI, the message passing interface, and a lot more.

So they are specially built for high performance computing workloads, and also, underneath, for the HPC environment. This platform is infrastructure agnostic, so you can seamlessly move it from one hyperscaler to the other without changing anything. That's on the bottom end; on top, the containers are able to host any kind of engineering workload, be it single solvers, like ANSYS Fluent or Dassault Abaqus, or highly complex workflows with pre-processing, solver after solver when it comes to multiphysics, then post-processing and remote visualization. All of it is in the container.

So you basically don't have to take care of your workflow itself, or of building it; it's all in the container, ready to go. And it's interactive, so you can interact with the simulation: improve parameters, modify, do multiphysics, do digital twins, even do data analytics, for example, where the analytics part is very compute intensive.

[00:34:46] Matt Trifiro: And how does UberCloud determine how many cloud computers it's going to spread the workloads around on? Because you say a container, and I tend to think of a single container running alongside other [00:35:00] containers on a single machine. And then I think of clusters, right? Let's just take Kubernetes or something, where you have multiple containers running, sometimes on the same machine, sometimes on other machines networked together. Help me understand how UberCloud relates to the sort of clustering orchestrators that exist today, like Kubernetes.

[00:35:18] Wolfgang Gentzsch: Excellent question. And you point to a USP, as we call it, a unique selling point, a unique feature, which is: we don't run multi-tenant.

That's when you have multiple containers on a single machine. An engineer would not really like to work on a machine where her competitors are working as well, right? They wanna have a single machine, and a single machine can consist of many nodes. Each node hosts one container, and these containers are talking to each other because they are solving one and the same problem. So they need to work in tandem together.

[00:35:58] Matt Trifiro: And do you spread that hypothetical scientist's workloads? Is it just on a single, single-tenant machine, or do you take multiple cloud machines, all single tenant, and then treat them as a cluster, where I can sort of scale out the workload like I would a cloud workload?

[00:36:18] Wolfgang Gentzsch: Yeah, it's a single-tenant environment, very correct. And each compute node hosts one of the, say, 100 containers, which together build your environment, and the engineer is dealing with it as one system. Yeah.

[00:36:36] Matt Trifiro: And it seems like one of the most powerful value [00:36:40] propositions is: to the scientists who are used to building their workloads on premises today, it's just lift and shift, no changes; it runs there. Now, just out of curiosity, what's the largest workload that anybody's run on UberCloud? How many machines, to make it simple?

[00:36:58] Wolfgang Gentzsch: Yeah, okay. There are a few engineers, you know, we call 'em power users, right? They increase the workload, say, for example, the degrees of freedom, the number of finite elements, the number of finite volumes, et cetera, successively, until the application does not scale really well anymore. Then, when you add more resources, you won't get far with it. So at first it's nicely linear, increasing, and then at one point in time the system is exhausted, so to speak.

[00:37:35] Matt Trifiro: That's when you have to go buy a supercomputer, right? You gotta take out your checkbook.

[00:37:39] Wolfgang Gentzsch: Yeah, that's when the supercomputer could do it a little bit better, because of the tightly coupled interconnect, which is basically the most expensive part of the machine. You won't easily do that in the cloud. The approach for the engineer is usually pragmatic, meaning you don't use too many nodes just for slightly increasing efficiency. So you are making wise use of it; also, your manager wouldn't really like it if you just tried to use an infinite number of resources.

[00:38:12] Matt Trifiro: So are we talking tens of computers, dozens, hundreds, thousands? What's the scale of these workloads?

[00:38:18] Wolfgang Gentzsch: Yeah, [00:38:20] so the average user, really the average user, uses between eight and 64 machines. Some use more. Translated into cores: usually, with the latest CPU technologies, you can have about 100 cores per compute node, and then you get to 10,000 cores.

[00:38:45] Matt Trifiro: Yeah, I was gonna say, 10, 11,000 cores pretty easily.

[00:38:46] Wolfgang Gentzsch: Yeah, yeah, this is a lot. And these days, you know, when it comes to artificial intelligence, like deep learning, machine learning, you can do the acceleration dramatically faster with neural networks: you do the training for the neural networks, and then you get a highly accurate prediction instead of the full simulation, which takes hours. You do a prediction that takes two to three seconds, and the result is about 95% accurate. Yeah.

[00:39:18] Matt Trifiro: In fact, one of the use cases that you sent to me in your email, I'd like you to tell the audience about, which is this manufacturer. Can you tell us the story of that manufacturer and how they're using UberCloud?

[00:39:28] Wolfgang Gentzsch: I guess you're talking about the Living Heart Project, for the heart valve. We have done this cloud study with the company, and they were providing the code, the Abaqus simulation code, for this 3D heart valve simulation. So it's about heart valve repair, and we have done 3,000 simulations on 3,000 clusters on the Google Cloud Platform.

And we have [00:40:00] used these 3,000 results to train a neural network algorithm, which was then able indeed, as I mentioned, to turn a 10- to 20-hour full heart simulation, a fluid-structure interaction, into a two- to three-second prediction with over 95% accuracy. And just for the curious audience: for that whole project, the resource consumption was just under $20,000, which, compared to the result you achieve, is peanuts, right?

I mean, it's amazing. You know, this is for real-time heart valve operation simulation for the doctors, in the future. It still has to be approved by the FDA, the Food and Drug Administration in the US.
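The surrogate-model workflow Wolfgang describes, running an expensive solver offline many times, fitting a cheap model on the results, then answering new queries in seconds, can be sketched in miniature. Everything below is a toy stand-in: a trivial function plays the role of the solver, and a one-variable least-squares fit plays the role of the neural network; it is not the Abaqus/Living Heart pipeline.

```python
# Hedged sketch of the surrogate-model idea: offline simulations train a
# cheap predictor that replaces hour-long solver runs with instant answers.

def expensive_simulation(x):
    # Stand-in for a 10-20 hour fluid-structure interaction run.
    return 3.0 * x + 0.5

# Offline phase: 3000 "simulation" runs (here instantaneous, of course).
xs = [i / 1000.0 for i in range(3000)]
ys = [expensive_simulation(x) for x in xs]

# Fit y ~ a*x + b by ordinary least squares (closed form for one input).
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

def predict(x):
    # Online phase: the fast prediction that replaces a full solver run.
    return a * x + b

print(predict(0.2))  # recovers the true solver's 3*0.2 + 0.5 = 1.1
```

In the real project, the "solver" has millions of degrees of freedom and the surrogate is a trained neural network, but the division into an expensive offline phase and a near-instant online phase is the same.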

[00:40:55] Matt Trifiro: Actually, the one I was referring to was a different one, but that's super interesting, the ability to run heart simulations, or at least to train the models.

I mean, that's an interesting dynamic, right? When you talk about the full continuum from the thing to the cloud: you've got the things at the edge, and you've got the things that are centralized. Most of these workloads are running in centralized environments, out in some region somewhere, and I can train these very complex models using high performance computing [00:41:40] techniques, and then I can push the results of those models down to the edge for fast inference in low-latency environments. One of the examples of that that you provided was autonomous driving, which we've talked about a million times, but the one that was really interesting was manufacturing: that manufacturer that builds like 70,000 parts an hour.

Yeah. So tell us: if you can tell us who it is, that's great; if you can't, just describe what their business is, what they're trying to accomplish, and what problem they had. That's the one I thought was super interesting.

[00:41:59] Wolfgang Gentzsch: Yes, okay. So this is the market leader in building, and I wanna be careful, I can't mention the name, because everybody would then immediately say, oh, these guys.

So it's the market leader in producing machines that make 70,000 parts per hour. Right? And everybody uses these parts, clearly. And they have hundreds of customers using such a 70,000-parts-per-hour machine. And these machines cost millions. So customers buy the machine, exactly, and they use these parts to make their products out of that.

[00:42:38] Matt Trifiro: Right. All right, so it's a widget maker, and a single machine makes 70,000 widgets (yes, yes, yes) per hour. Perfect. That's a lot of widgets.

[00:42:47] Wolfgang Gentzsch: Yes. And there are many parts of this machine which can break, right? And they have equipped this machine with hundreds of sensors.

They listen, they measure: temperature, noise, whatever, and they report it. Now, okay, it's a local IT environment; it's an edge, obviously, right? (Sure.) It's responsible for orchestrating all these sensors, getting the essence out of their data, and then sending it to the cloud, and the cloud uses it for training a machine learning algorithm that [00:43:20] gets more and more intelligent based on that, because there are hundreds of machines out there reporting any deviation, any strange noise, any non-standard behavior, so to speak, right? And it's just in the making now: the idea is that as soon as a new noise is discovered on machine X, somewhere in Brazil, it gets reported to the central brain in the cloud, and that brain discovers: oh yes, it is exactly part 777, which broke recently with another customer.

And then, okay, let's ship that part, so that they can basically do a controlled push: bring down the machine, do the repair, and then bring it up again. Usually this takes maybe one or two hours, while if it really breaks, then the machine stands still, usually for days.
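The edge side of the pattern Wolfgang describes, summarize raw sensor streams locally and ship only the anomalies upstream, can be sketched in a few lines. The names and thresholds below are invented for illustration; this is not the manufacturer's actual system.

```python
# Toy sketch of edge-side filtering: only statistical outliers from the
# sensor stream are forwarded to the cloud's "central brain" for training.
from statistics import mean, stdev

def edge_summarize(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    anomalies = [r for r in readings if abs(r - mu) > threshold * sigma]
    return {"mean": mu, "stdev": sigma, "anomalies": anomalies}

# Normal vibration data with one outlier spike:
data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 9.0]
summary = edge_summarize(data)
print(summary["anomalies"])  # only the 9.0 spike gets sent to the cloud
```

Shipping the compact summary instead of the raw stream is what keeps the cloud round trip affordable when hundreds of machines report continuously.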

[00:44:21] Matt Trifiro: Well, I mean, that is, when you look at manufacturers, the number one unanticipated cost factor: a stopped production line. Yeah, it's millions, it's huge. Imagine: every hour, it's 70,000 parts you didn't make. (Exactly.) That's a lot of parts. And that general model, for how to approach doing predictive analysis on complex systems, first of all, it can be applied to anything. You know, GE does it with their jet engines and their trains.

You could do it with a bridge, you could do it with a car, you could do it with all these things. And it seems like the model that's really promising is [00:45:00] using the edge, which is relatively expensive, quote unquote, right? Because you've got batteries that you may have to keep charged or minimize energy use, and you've got the expense of putting all these processors out there to do this stuff. So you wanna make the things at the edge super focused and super efficient, whereas the things in the cloud have more luxury to run these longer processes across many, many computers. So you can train these models in the cloud and then push the intelligence, or the heuristics, or the inferences down to the edge. And whether that's a better algorithm for a smart camera to detect a firearm, or, for example, a new way for a manufacturer to detect a vibration they didn't notice before, one that indicates a part probably needs to be replaced, that seems completely transformative to everything in our lives.

Tell me how you see that. What do you see it changing in our lives? I mean, you mentioned healthcare, which is super interesting. Be a little speculative and just imagine where we as an industry could take this.

[00:46:05] Wolfgang Gentzsch: Yeah, high performance computing itself is growing, I would say, you know, because it's part of our business. It's growing nicely, around 7 to 8 percent average annual growth. HPC in the cloud, though, that's the difference, right? HPC cloud is growing about 18, 19, 20 percent per year.

So at least double. And then there are new applications coming up on the horizon, with early adopters already working on them. Like digital twins, [00:46:40] for example, where you really need real time, to basically accompany a physical twin, like the machine that we talked about, with a digital twin that is ideally at least as fast as the physical twin, so that you can watch and survey it.

[00:47:01] Matt Trifiro: Does that mean that you see HPC workloads eventually moving to the edge, because you need to run in real time?

[00:47:08] Wolfgang Gentzsch: And they are already, yeah. (Very good. Give me an example of that.) Yeah, I mean, take the same example as before: this company that we mentioned has built the edge for every machine. They have an edge system, an edge environment, and that one is already intelligent. So locally there is a lot of computing already, to get some information out of the data, and this information can then be sent to the cloud and further digested, so to speak. So there is HPC at the edge, and certainly more massive HPC in the cloud. And as I said, you need that for the digital twin, but you also need machine learning for the digital twin, because you want your machine learning algorithm to be even a bit faster than the physical machine, to do prediction, right? To do a highly accurate prediction of what could potentially happen, so that you can fix it.

And then, yeah, obviously healthcare is another big topic these days, growing dramatically, because of all these machines you see at your dentist or in the hospital; you might have had the luxury of benefiting from them already.

They get much more intelligent now. They recognize early what problems you have, especially internally, where you can't really look inside, et cetera. But then, a very important part of many of these things is data analytics. You get tons of data; I mean, we are facing a data deluge now, right?

So you need to analyze these data, to get really useful information out of billions and billions of data points coming from millions of sensor sources, for example. And then there are more areas coming up on the horizon; autonomous driving is another one. And this is really

[00:49:11] Matt Trifiro: just everything in our world.

I mean, even in my life today, where I don't have an autonomous car, I drive my own car along with everybody else on the road, and that's a huge fluid dynamics problem, right? I'm just imagining how much more efficient we could get with everything. Let's just take roads, or waste management, or any of these things, just by building these digital twins and refining these algorithms.

Okay, so data: lots of data gets created, probably too much to affordably, and certainly not in real time, ship back to the cloud, 'cause the cloud might be 150 milliseconds away. Now, obviously, we can move the cloud out farther to the edge, which is happening already today, and we can move it.

In fact, we can take an Amazon Outpost and stick it on premises, and now you have a cloud on premises. Do you see HPC workloads running in real-time [00:50:00] environments, or do you see HPC workloads running alongside real-time environments? I'm just trying to understand: do you see HPC scientists, your customers, adapting their workloads to run in real time? And by real time, I mean tens of microseconds, maybe hundreds of microseconds, let's say maybe one or two milliseconds, something like that. Do you see HPC workloads running like that, or will HPC always be kind of the slower, more methodical; and when I say slower, that's relative.

[00:50:33] Wolfgang Gentzsch: My simple answer is yes, exactly as you describe. I mean, otherwise people wouldn't buy these $500 million machines, right? The exaflop machines these days, that perform, you know, exaflops.

[00:50:50] Matt Trifiro: Who makes these machines? Who makes these exaflop machines?

[00:50:52] Wolfgang Gentzsch: Previously Cray, which is now in the HPE harbor, so to speak. It was simply too expensive for a relatively small company like Cray to build such beasts. I mean, you need a lot of money backing up the whole development and design.

(You only have to sell one of them, though! One a year.)

Yeah, yeah, sure, but you need more than one per year, right? I mean, we see more and more exaflop systems coming now. The first have already been installed; Oak Ridge's Frontier is one of those, with my dear friend Jack Dongarra. And then there are three more coming in the US, there are [00:51:40] three more coming in Europe.

The Chinese might have two very soon. And at one point in time, you know, like the petaflop machines, the previous generation, a factor of 1,000 less in speed, about 10 years old now: we had hundreds of petaflop machines recently, right? So at some point they all go into the mainstream. But we are not all talking exaflop machines, I wanna say that.

[00:52:04] Matt Trifiro: Do you see a future where one or more of the major cloud providers buys one of these supercomputers and offers it?

[00:52:09] Wolfgang Gentzsch: Yeah. I mean, companies like Microsoft, on Azure, they indeed already have a Cray, not that huge a machine; they have a close collaboration with HPE slash Cray. But you have to buy, ideally, the whole system, right?

This is very expensive, right? Okay. But it's not the average engineer; he is already happy with 500 or 1,000 cores, and this is affordable, because, if you do it right, you can use spot instances, where you pay only 20% of list price, or reserved instances, where you pay only 50% of list price, for the right application.

So currently the price, the cost, of cloud is on the same order as the cost of on-premise systems. Five years ago it was a factor of five difference; today it has melted down. The cloud cost melted down because of always having the fastest and latest technology, to basically break even. So now that the cloud is similarly costly, so to speak, to on-premise, you can look at the other benefits, [00:53:20] right?

So the other benefits: for example, time to market, right? Higher quality, because you can do more parameters, more materials, et cetera, and get a better product, which means you are more competitive, coming into the market with a better product, ideally two months earlier than the others. That's a lot of benefit. And innovation is another benefit: you can innovate at your fingertips these days. You don't have to build two, three, four, five physical models and crash 'em against the wall or whatever, which costs a million per crash, so to speak. Now you do it in the cloud, which might cost a thousand dollars, or $5,000 even; it's much, much cheaper.

So there are a ton of benefits these days when you move to the cloud. 
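The spot and reserved discounts Wolfgang quotes (roughly 20% and 50% of list price, figures as stated in the conversation, not an official cloud price sheet) make the cost comparison easy to sketch. The list price below is a hypothetical number for illustration only:

```python
# Back-of-envelope cost sketch for a cloud HPC job under the three
# pricing models mentioned: on-demand (list), reserved (~50%), spot (~20%).
LIST_PRICE_PER_NODE_HOUR = 3.0  # hypothetical list price in dollars

def job_cost(nodes, hours, pricing="on_demand"):
    multiplier = {"on_demand": 1.0, "reserved": 0.5, "spot": 0.2}[pricing]
    return nodes * hours * LIST_PRICE_PER_NODE_HOUR * multiplier

# A 64-node, 10-hour simulation under each model:
for p in ("on_demand", "reserved", "spot"):
    print(f"{p:9s}: ${job_cost(64, 10, p):,.2f}")
```

The caveat "for the right application" matters: spot instances can be reclaimed by the provider, so they suit interruptible or checkpointed workloads, not every simulation.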

[00:54:14] Matt Trifiro: So one of the things I've been wanting to ask you about, because there aren't that many people whose careers have spanned such a wide swath of the modern world: you joined Sun back when they used the phrase "the network is the computer," which was, you know, still provocative, right? And funnily enough, when Oracle bought them, they lost the trademark; Cloudflare registered the trademark "the network is the computer." And then, like I said, you wrote an article, or it was a transcript, where you said "the internet is the computer." I'm curious: what does "the network is the computer" mean, and what does "the internet is the computer" mean to you?

[00:54:55] Wolfgang Gentzsch: Yeah. So the sentence "the network is the computer" [00:55:00] was coined by Scott McNealy, one of the co-founders of Sun Microsystems and, at one point in time, my boss. He coined it when they started shipping the workstation. The workstation, you know, that was the novelty, was connected to a network, so that these machines could basically talk to each other.

And then, when I came to Sun 15 years later, that was in 2000, our technology was called Grid Engine. It was basically the era of grid computing; we talked about this before. So the grid networked together larger machines, not only workstations, by the way.

So my company in those days was called Gridware, and it was acquired by Sun Microsystems in 2000. And this was a grid operating system, so to speak, so that you could move simulations to the least-loaded machine. You didn't have to look into a machine to see how loaded it was: the Grid Engine, a workload management system, was able to basically know almost everything about this loosely coupled cluster of workstations. It was one of the first global schedulers. And it is still used even these days, you know; Grid Engine went through Sun Microsystems to Oracle to Univa, until it was recently acquired by Altair, an engineering simulation firm.
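The core idea Wolfgang attributes to Grid Engine, a workload manager that routes each job to the least-loaded machine so the user never inspects load themselves, can be sketched in miniature. This is a toy model of the scheduling policy, not the actual Grid Engine implementation:

```python
# Minimal sketch of least-loaded scheduling: pick the machine with the
# lowest current load, assign the job there, and update the load table.
def submit(job, cluster_load):
    """Assign `job` to the least-loaded machine and return its name."""
    machine = min(cluster_load, key=cluster_load.get)
    cluster_load[machine] += job["load"]
    return machine

load = {"node-a": 0.7, "node-b": 0.2, "node-c": 0.5}
print(submit({"name": "sim-1", "load": 0.3}, load))  # node-b, the least loaded
```

Real schedulers add queues, priorities, resource requests, and fairness policies on top, but the dispatch decision at the center is this comparison.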

And they [00:56:40] use it. (So some of your code is still there?) It's still there. And even the people, all the people, moved, except me. (Oh, interesting.) I got out of that part after four years, doing other very nice things, mainly real grid projects: building the North Carolina statewide grid, for example, then the German D-Grid, when the government called me back, and then European grids, like DEISA and EUDAT.

So that was the era of grid computing, and it was loosely coupled. And obviously, with this explanation, it was certainly a predecessor of the cloud. The cloud, like Microsoft Azure, they have over 60 data centers, and each data center has thousands and thousands of machines, and some portion of those are high performance servers, which you can use for complex engineering simulations.

So the cloud is an evolution of the grid, with a much more dynamic, flexible, agile approach, while the grid was still basically a bunch of traditional machines connected together in a network.

And that's why we called it the grid. And the cloud made use of the internet, right? These compute nodes in the grid, the grid was part of the internet, because you transferred data over the internet. So that's why I took Scott McNealy's sentence and turned it around like that: the network, then the grid, now the cloud, the internet, yeah.

[00:58:17] Matt Trifiro: In a way it's an imperfect metaphor, [00:58:20] because obviously the network is not the computer; the computer is the computer. But to me, what it means is: we have things in our common life, in our day to day, that we actually want to run on multiple computers, because it's more efficient, either from a speed standpoint or a cost standpoint or something else. And obviously the network is a crucial part of that. In fact, sometimes the most crucial part, 'cause you can't get a workload to another machine without it being on the network. So when I think of "the network is the computer," "the internet is the computer," it's almost like the cloud: you're throwing your workload onto this network of computers, and it's figuring out how to run it, which is sort of what UberCloud does. Okay, that's super cool.

And then the last thing I wanna ask you is, again, you've been in this industry a long time, you've seen a lot of change, and there's a lot of change coming. What's most exciting to you? What change are you hoping to see happen in your lifetime that's most exciting to you?

[00:59:22] Wolfgang Gentzsch: Yeah. Now HPC is really in the hands of everybody, and everybody here means, you know, engineers and scientists. A few decades ago it was only given into the hands of specialists, right? And that opens up the door for so many new applications, and makes any kind of research or product basically come out much faster, so that this exponential acceleration, which we have been in for quite some time, will continue and will [01:00:00] help us to solve problems, real problems.

And I say problems, not challenges. I mean, like in healthcare, for example, or climate and weather forecasting, and also new technologies: electric cars, autonomous driving, and all that stuff. So successively making our lives even more convenient, more comfortable, and also solving the problems mankind is facing.

[01:00:33] Matt Trifiro: Do you think we have, in the research enclaves, the necessary knowledge and algorithms to solve these problems, and what we mostly need is the democratization, being able to run them faster and cheaper? Or do you believe there's still some basic science innovation that's needed?

[01:00:51] Wolfgang Gentzsch: Oh, absolutely.

I mean, on the hardware or architecture side, we will see quantum computing, but quantum computing is not well suited for everything, right? There are specialized applications which run a thousand times, or even more, faster than on conventional machines. So we will see that, and people certainly wanna try it out and get the benefit from quantum computing by developing new algorithms for it.

So that's the software side. The software side is always a little bit behind, because, I mean, it's a real challenge to develop a 10-million-line piece of software from scratch, right? You won't easily tackle that. You start with small solvers, and then you grow and [01:01:40] grow, from two and three dimensions into four dimensions.

So that has been the case for the last 50 years already, and we will definitely accelerate. The most exciting thing is really that we see healthcare advancing dramatically. Maybe we solve, and not maybe, at one point in time we will know much more about viruses, and about cancer, and about, for example, the brain. There is a huge brain project here in Europe, and the Jülich research center is deeply involved in that.

For example, there is a brain simulation conference every year about these challenging things. We had a very nice application on schizophrenia: instead of hammering holes into your brain, now you do it from the outside, with a ton of high performance computing supporting that, right? And, I mean, unimaginably

[01:02:36] Matt Trifiro: Unimaginably many, yeah. What a fascinating world we're stepping into, Wolfgang. This has been a really great, wide-ranging conversation, a mixture of history, present, and future. I'm super excited. If people are interested in finding you and/or UberCloud online, where should they go?

[01:02:53] Wolfgang Gentzsch: They can either look on LinkedIn, find me under my name, Gentzsch, or we have a help site on our website where they can type in a few questions or whatever. So we are very open, and we look everywhere for these kinds of questions and the help we can provide. (That's awesome.)

[01:03:15] Matt Trifiro: Thank you very much. 

[01:03:16] Narrator 2: That does it for this episode of Over the Edge. If you're enjoying the show, [01:03:20] please leave a rating and a review, and tell a friend. Over the Edge is made possible through the generous sponsorship of our partners at Dell Technologies.

Simplify your edge so you can generate more value. Learn more by visiting dell.com.