Over The Edge

Data and The Last Mile Problem with Jay Limbasiya, Global AI & Data Science Business Development Lead at Dell Technologies

Episode Summary

This episode of Over the Edge features an interview with Jay Limbasiya, Global AI & Data Science Business Development Lead at Dell Technologies. Jay specializes in AI, data science, analytics, and data management. Matt and Jay discuss why we should care about the edge, the complexities of deciding where to treat data, and the last mile problem.

Episode Notes

This episode of Over the Edge features an interview between Matt Trifiro and Jay Limbasiya, Global AI & Data Science Business Development Lead at Dell Technologies. With a background in the United States Intelligence Community, Jay specializes in AI, data science, analytics, and data management. 

Matt and Jay discuss why we should care about the edge, the complexities of deciding where to treat data, and the last mile problem. They also talk about the benefits of data lakehouses and the current debate around the dangers of AI. 

---------

Key Quotes:

“You can use this data maybe ten years down the road for something else, right? You can use it for a completely different type of use case you never even thought of. And that's why the data is so valuable. It's not so much the algorithm itself, it's can you capture this data and even save this data for maybe technology that hasn't even come out yet?”

“It definitely doesn't make sense to me to completely say, Hey, the data's too large at the edge, so we're just not gonna consume it… that obviously doesn't make sense. I think there's an efficiency play here as well. And that efficiency play is, obviously it costs money to store data, so let's make sure we store the right data.”

“What's the value of the data that you're collecting and how can these pieces of data enhance the algorithm at the edge? Because even though you deploy something on the edge in order to refine it for future trends, maybe four years down the road, or three years down the road, you still need data to be able to kind of do that R and D work.”

“The core is actually where you identify, is there data I'm missing? Is there something I haven't even collected?”

---------

Show Timestamps:

(02:30) Jay’s start in technology

(07:01) Working in U.S. intelligence

(09:00) Why we should care about the edge

(13:26) How to make decisions about managing and treating data

(18:45) The Last Mile Problem 

(26:00) Data cleansing

(29:00) Data analysis at the edge 

(30:00) Training the models and pushing them towards the edge

(35:14) Data lakehouses

(44:42) Jay’s thoughts on the dangers of AI

--------

Sponsor:

Over the Edge is brought to you by Dell Technologies to unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we’re here to help you simplify your edge so you can generate more value. Learn more by visiting DellTechnologies.com/SimplifyYourEdge or click on the link in the show notes.

--------

Links:

Follow Matt Trifiro on Twitter: https://twitter.com/mtrifiro

Follow Matt Trifiro on LinkedIn: http://linkedin.com/in/mtrifiro

Connect with Jay Limbasiya on LinkedIn: https://www.linkedin.com/in/jaylimbasiya/

www.CaspianStudios.com

Episode Transcription

Narrator 1: [00:00:00] Hello, and welcome to Over the Edge. This episode features an interview between Matt Trifiro and Jay Limbasiya, Global AI and Data Science Business Development Lead at Dell Technologies. With a background in the United States Intelligence Community, Jay specializes in AI, data science, analytics, and data management.

Matt and Jay discuss why we should care about the edge, the complexities of deciding where to treat data, and the last mile problem. They also talk about the benefits of data lakehouses and the current debate around the dangers of AI. Before we get into it, here's a brief word from our sponsors. 

Narrator 2: Over the Edge is brought to you by Dell Technologies to unlock the potential of your infrastructure with edge solutions, from hardware and software to data and operations across your entire multi-cloud environment.

We're here to help you simplify your edge so that you can generate more value. Learn more by visiting [00:01:00] DellTechnologies.com/SimplifyYourEdge for more information or click on the link in the show notes.

Matt Trifiro: Two years ago when I started the Over the Edge podcast, it was all about edge computing. That's all anybody could talk about.

But since then I've realized the edge is part of a much larger revolution. That's why I'm pretty proud to be one of the founding leaders of a nonprofit organization called the Open Grid Alliance, or OGA. The OGA is all about incorporating the best of edge technologies across the entire spectrum of connectivity.

From the centralized data center to the end user devices, the open grid will span the globe and it will improve performance and economics of new services like private 5G and smart retail. If you want to be part of the open grid movement, I suggest you start at opengridalliance.org, where you can download the original Open Grid manifesto and learn about the organization's recent projects and activities, including the launch of its first innovation zone in Las Vegas, Nevada.

Narrator 1: And now please enjoy this interview between Matt Trifiro and Jay Limbasiya, Global AI and Data Science Business Development Lead at Dell Technologies. [00:02:00]

Matt Trifiro: Hey Jay, how you doing today? 

Jay Limbasiya: Good. I'm doing fantastic. How are you? I'm great.

Matt Trifiro: I wanted to go way back. How'd you even get involved in technology? What was the spark?

Jay Limbasiya: So I'll give you a little background. My parents specifically are the reason why I got involved in technology. My parents are immigrants that came to the United States way back when. My dad, Carson Lomia, he was a civil engineer and he grew up with nothing. We were fairly, you know, weren't doing very well when we came to the United States.

And the way he sparked my interest was that he had to build his name up again from the ground up, coming to a new country and. The way he would spark my interest is he would bring home these old computers from his work that they were gonna throw away. And I remember being in like fifth grade, you know, with these old IBM towers that were like barely turning on with green screens and you know, just being fascinated like, wow, like what is this?

What is this floppy, massive disc with holes in it that I gotta put in and that's what's gonna run my program That, you know, whatever's, this is like mid eighties? Yeah, like the mid eighties, mid nineties. I remember [00:03:00] taking it all apart individually, taking out each bolt, each screw, you know, and bagging 'em up.

And then as the years went on, I remember him saying, well, that's only half the fun. The other half you haven't even tried yet, which is putting it all back together. That's where I think I sparked my interest. You know, for me, technology wasn't like others. Right. It wasn't at my fingertips. My parents didn't have internet at the house till, you know, high speed internet wasn't a thing in our house.

Right. Until I was in like 10th grade. Right. It was too expensive. It just wasn't something we needed. Yeah. And that's only because my sister went to college and she needed to write her applications to go to school. So when I started going down this path of like technology and understanding, well, What fuels these computers?

What's data? What am I getting out of this and why is this even important? That's when I started getting connected to things like MySpace back in the day and understanding like there's multiple uses to technology, right? It's not just business use, which is what I was used to since my dad would bring home these computers from work.

But now you can use it for networking. You use it for school, you can use it for [00:04:00] research and you know, educating yourself. And that's really how I started is I didn't go to formal school at 14 to code. I did it on my own and it was primarily cuz I was interested. 

Matt Trifiro: What was the first language you programmed in?

Jay Limbasiya: C.  Yeah. I know universities start you off in Python and I, I was, I mean, I mean

Matt Trifiro: now, but yeah, C was the language. It was the one that had the best compilers and Yeah, I mean, I get it. Yeah, I get it. Yeah. I still miss C. 

Jay Limbasiya: That was my whole story, and things just kind of grew from there.

It turned out that I didn't actually think you can make money from being interested in technology, but here I am to say that your hobbies can and your passions can actually be something you do for work. Yeah,

Matt Trifiro: that's cool. And as your career's developed, it's sort of developed along some really interesting paths, and data and AI seem to have become your specialty.

So tell me about that evolution. How did that become your specialty?

Jay Limbasiya: My career originally began in the intelligence community, right? The United States Intelligence Community. Oh, okay. And it was heavily focused around data, all things data. [00:05:00] Data is what fuels intelligence, and the analytics behind that data is what helps us.

Matt Trifiro: Can you describe the work you were doing, vaguely enough to not get you in trouble?

Jay Limbasiya: I think a lot of the work was taking data and making sense of it so it could be used in operations. It could be everything from identifying subjects of interest to behavioral patterns, to even something as simple as, I know we're gonna get into edge here, but even something as simple as, can you identify a defect on an asset that's being used in the field?

And can you identify it before that defect even happens? Right.

Matt Trifiro: So are you just, every so often somebody would show up with a project and a data set and say, we need to see if you can figure something out of this, or what?

Jay Limbasiya: It was interesting cuz I had a lot of great mentors and originally it started off in really just these basic projects where, you know, if you remember when VMware came out and they had vCenter and the government had bought vCenter and they said, well hey, is there a way that you could build a custom monitoring application that monitors your physical [00:06:00] and virtual machines all in one platform?

That was like the big Wow. You can monitor your virtual machines at the same time as your physical machines. This is amazing. Right? Yeah. And that led to me kind of sticking my nose into places that probably I shouldn't have.

And I'm glad I did because it opened up new projects. Right. And yeah, it was, it was kind of like that. It was kind of like, Hey Jay, like, would you be interested in doing this project? And this is the data set and this is what we know. And I was always like, yeah, why not? Yeah, you're the crazy one. Oh yeah.

Matt Trifiro: Let's give that one to Jay. Okay, so, so you started in the intelligence community, sort of cutting your teeth on data science and those things. And then what?

Jay Limbasiya: Well, I was doing school at the same time. While I was doing school, I think my career grew in the intelligence community. I started learning a lot of stuff.

Not just data, but hardware itself. You know, I knew Dell before I even knew I could work for Dell. Building out racks was one of the things that I did, right? It's not the most fascinating of works, but it's something I learned. So I just kind of left my doors open to anything, right? Mm-hmm. And I learned all different types of technologies, but when I came outta school, [00:07:00]

I was at a point where I was like, well, I wanna try something different, right? I hadn't worked at a startup, so I wanted, you know, I had friends who are out west, they're showing me all these amazing nap pods, stock options, they're enjoying, right? Stock options and unlimited food. And I was like, I want that.

This is amazing. Why am I here? You know? And so I kind of transitioned to the corporate side of the house. I went to a kind of a consulting firm where I got my hands dirty with multiple different types of industries, right? Everything from healthcare to automotive, you name it. It was interesting cuz I'd only worked in the defense sector for so long.

I was 18 when I started, essentially. Yeah. So. It was interesting cuz I got to tackle completely new problems. But then again, that also, I wanna say, made the rough edges get a lot smoother. When you're an engineer, you're essentially learning in a very black and white world. It's very math oriented, but there is a sense of creativity that is very important to innovation and especially data science and ai.

That I think you get from working in environments that make you uncomfortable. And so, you know what I mean [00:08:00] by that is like I learned why it's so important for having a, a proper ux, the output of your application, you know, how to make it easier for customers and consumers to be able to make sense of what they're looking at and how different things affect that, right?

Everything from color to positioning of data on a page, those are things that most. I think most individuals don't think about, right? And they think, well, let's just slap this on a Power BI dashboard and voila, it's just gonna magically influence business. There's a missing piece to that, which is how can you really help the end user understand what they're looking at, not just put data on a piece of paper.

Right.

Matt Trifiro: Yeah. So, so turning data into information or action or, yeah. Right. Exactly. Some, some change. Yeah, some insight or something like that. Right, right. What's your current position? What are you doing now?

Jay Limbasiya: I'm a business development manager. I lead the global AI, analytics, and data management side of the house for our unstructured data solutions business here at Dell. It's been wonderful. It's been great.

Matt Trifiro: Great. So [00:09:00] let's talk a little bit about edge. I mentioned before we got on the interview that it's the middle of season three and we haven't really talked about the definition of edge in a long time, and I sort of want to go over some of that rudimentary stuff cause I think my, my listeners will appreciate it.

Why should I care about Edge? Let's start there. Why should I care about Edge?

Jay Limbasiya: The primary reason you should care about edge is because of how far technology has evolved. The edge is actually not only your consumer of data, but it's also your generator of data. And in this world, data has come, right?

Matt Trifiro: So let's, let's talk about that. Let's talk about that. Okay, so, so what does it mean to consume data and what does it mean to create data, and why now at the edge?

Jay Limbasiya: Well, I think as edge devices themselves have become more refined, and I would say the connectivity and the security of them have been more refined now too, it's grown quite a bit.

Now, what's an edge device? I mean, it can be something as simple as a sensor, a vibration sensor. So something that generates data. [00:10:00] Right. Anything. So in the real world. Out in the real world. Yeah, exactly right. It could be a camera. I mean, you know. Yeah. And why should we care? It's because those edge devices, everyone interacts with them and they don't even know they're interacting with them.

And it's extremely important to understand, you know, what data as a human living in a technology world are you interacting with and how are you interacting with those edge devices? Right? Sometimes, like I said, you don't even know you're interacting with the edge device. You know, let's just take the ring camera on the door, right?

The ring camera on the door. Let's just take even traveling, right? You go to a city like London, which has the largest number of cameras installed of any city. You don't even know that you're being watched. You don't even know that there's security measures in place. Right? Yeah. But that's just how data is being used.

That's how your data, personal data, when you interact with that device is being used. But now let's talk about the generation of it, right? Every time an Edge device is put out there, You're generating, you know, mass amounts of data, cuz that specific device doesn't just turn off as soon as you stop interacting with 'em.

I mean, just take like [00:11:00] vibration sensors on a wind turbine, for instance, right? It's consistently monitoring what's happening with that wind turbine and generating data based off of the sensors that are being used. This data itself is valuable data, right? You may say, well Jay, there's no anomalies in that data, so why do we care about it?

Well, the reason we care about it is because what makes edge devices function properly is the continuous refinement of the algorithms that sit at the edge, and that can only happen with advanced analytics. When you bring those large sums of data back to the core, that's where more of the hardcore, I wanna say, analysis work gets done, right, the new algorithms that may be formed, the new innovations that happen, and that will only influence new edge experiences.
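
To make that loop concrete, here is a minimal sketch of an edge device that runs its normal workload and periodically pulls a refreshed model from the core, in the spirit of the continuous refinement Jay describes. The endpoint, file paths, and cadence are assumptions for illustration only, not anything specific to the episode or to any Dell product.

```python
import json
import urllib.request

MODEL_DIR = "/var/edge/models"           # hypothetical local path on the device
CORE_URL = "https://core.example.com"    # hypothetical core API endpoint

def current_model_version() -> str:
    """Version tag of the model the device is currently running."""
    try:
        with open(f"{MODEL_DIR}/version.txt") as f:
            return f.read().strip()
    except FileNotFoundError:
        return "none"

def fetch_latest_version() -> str:
    """Ask the core which retrained model version is current (assumed JSON API)."""
    with urllib.request.urlopen(f"{CORE_URL}/models/latest") as resp:
        return json.load(resp)["version"]

def refresh_model_if_stale() -> None:
    """Pull the retrained weights down to the edge when the core has a newer model."""
    local, remote = current_model_version(), fetch_latest_version()
    if local != remote:
        urllib.request.urlretrieve(f"{CORE_URL}/models/{remote}/weights.bin",
                                   f"{MODEL_DIR}/weights.bin")
        with open(f"{MODEL_DIR}/version.txt", "w") as f:
            f.write(remote)

# A real device would call this on a timer between inference runs, e.g. once an
# hour, so the edge keeps benefiting from the analysis work done at the core.
refresh_model_if_stale()
```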

Matt Trifiro: Yeah. So you know what I think is kind of interesting about the way you define that, you know, it's this place where data is increasingly being created and consumed, and the idea that that's like a novel idea to us is actually, it almost shouldn't be, right? Because we live in the real world. Like that is where all the things are.

So why that's where we are and [00:12:00] what the things are that it's actually almost a little crazier. If I came to you and said, okay, wait. Rather than like collecting and analyzing, do the data here, what am I, I'm gonna ship it to. Seattle or Redmond or wherever. Right? Yeah. And then if you wait long enough, it'll come back and it'll be useful.

Right, right. Like that sounds absolutely nonsensical. And I know the reasons for that and the economies of scale and stuff, but I think very few people realize that the internet we have today was designed primarily to deliver content to humans. Yep. And so there's all kinds of things that, you know, speeds that humans operate.

The bandwidth that human operate, the amount of data we can consume at any given time. And the fact that we tend to consume more data than we create is individuals. Now, you're right, we have lots of sensors, they're all creating lots of data, but like, I can only type so fast. Right? Right. And so that's, that's really interesting.

And so one of the ways that I've talked about it, this seems aligned with the way you think about it, is moving from a world where, It was primarily machines talking to humans or [00:13:00] humans talking to humans, to a world where it's machines, talking to machines, and yep, in that world, microseconds manner, nanoseconds manner.

So let's talk about, okay, so we got all these sensors, we're creating all this data. We have to analyze the data. How do we make decisions about where we want to analyze that data and how we want to treat that data? How should I think about that?

Jay Limbasiya: I think every use case is different, like you said, depending on what's happening with, you know, specifically

Matt Trifiro: that use, well, let's, let's take the cameras in London.

Let's just imagine, okay, I've got, I don't know, 500 cameras, a thousand cameras, and let's say some of them are even 4k, like high resolution, some maybe not. So how should I think about that? Where do I put that data? How do I analyze it? How do I collect it? Yes. What do I do? How do I think about it?

Jay Limbasiya: So the way I think about it, right, is even if you just say cameras for instance, right?

Yeah. Is. What is the value of that data itself that you're collecting, that you're using, actually? So when a camera, for instance, comes in a focus, let's just, I'm gonna assume this. You [00:14:00] walk in front of a camera, the camera doesn't understand who you are, because all the facial recognition models that have been trained, you know, there's a variability of course, in that.

And that variability is too high. So it says, Hey, I don't know this person. Right? I don't know Matt. I've never seen him. So there is value to collecting that specific data because it's never, it's never seen you. Right? So because it's never seen you, it doesn't know how to classify you. Mm-hmm. Right? It could classify you better.

More efficiently if it's able to make sense of who you are. Right? And of course, that's valuable data, right? When you think about security, when you think about national intelligence, you know, you those, those pieces of data, they matter. And so that's when you think about, okay, well is it worth. The cost to stream this data back to a centralized location, right?

Or stream it back to a location that is off the edge. So you can not only retrain the model, but maybe do a little bit more, you know, in-depth research about what you're seeing in terms of the data that's being collected right now.

There [00:15:00] is something called the feedback loop, for instance, right? So where you may have an algorithm that falsely identifies a piece of information, like an image of you, for instance, and may say, well, no, that's not Matt. That's Jay. So you want to correct that. So it takes out the false nature of what's being trained on the algorithm, but there's also other things, right? I think it's not so much about, Hey, can you take what's being produced at the edge and keep it at the edge?

It's what more can you do with this data beyond the edge? You may not use it for identification of Matt. But you may be able to use it for something as simple as, well, how many individuals wear that specific shirt that Matt's wearing right now? And I get an analysis of how popular that shirt is in a specific area, and then be able to sell that information back to a retail organization.

Maybe the, even the individuals that make that shirt itself, right, they may say, Hey, we've seen 80% of males in London wearing that specific shirt. Should we increase our advertising, you know, budget for specifically the uk, right? Of course. We know they're not gonna do this. This is hypothetical. They're gonna use it for national security and [00:16:00] making sure that there's behavioral analysis and things like that that go into it.

But all those advanced deep learning models need a lot more compute power. Right, and they need a lot more research, a lot more investment when it comes to advanced analytics. Yes, the edge can make sense of what's going on at the edge, but in order to make sense of something that's really beyond that, you really need more compute power.

You need more storage. And really, you can use this data maybe 10 years down the road for something else, right? You can use it for a completely different type of use case you never even thought of. And that's why, you know, the data is so valuable, right? It's not so much the algorithm itself, it's, can you capture this data and even save this data for maybe technology that hasn't even come out yet?
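
As a rough sketch of the feedback loop Jay mentions, here is what a single corrected edge prediction might look like when it is logged for the next retraining run at the core. Every field name and the storage format are hypothetical, purely to illustrate the idea of capturing the correction rather than the raw frame.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    """One corrected edge prediction, queued up for the next retraining run."""
    camera_id: str
    frame_ref: str        # pointer to the stored frame, not the frame itself
    predicted_label: str
    corrected_label: str
    confidence: float
    timestamp: float

def log_correction(record: FeedbackRecord, path: str = "feedback.jsonl") -> None:
    # Append-only log the core can sweep up when it rebuilds the training set.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: the edge model said "Jay" but a reviewer confirms the person is Matt.
log_correction(FeedbackRecord(
    camera_id="cam-042",
    frame_ref="edge-storage://cam-042/2023-05-01T10-00-00.jpg",
    predicted_label="jay",
    corrected_label="matt",
    confidence=0.41,
    timestamp=time.time(),
))
```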

Matt Trifiro: So capturing the data of a thousand 4K cameras is like petabytes. Yeah. Probably is not something you actually could afford to ship back to the central location.

Jay Limbasiya: I think unless it's by train. Well, yeah, exactly, and I'm being serious. I think it's more the [00:17:00] metadata than it's the actual physical image itself.

Right? It, it's not so much the actual physical image you're capturing. Of course, you can take metadata out of that and take only bits and pieces of the data that you really, really need. Of course, there's things like, here's something I heard the other day. With the Tesla cars, right? They only stream data back to their core location.

If the cars or the device itself experiences something that has never been experienced before, maybe a brand new road sign or a brand new stop sign or, you know, a stoplight, for instance, or maybe a completely different type of sign it's never seen before.
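
As a rough illustration of that Tesla-style pattern, here is a hedged sketch of the decision an edge device might make per detection: keep routine, high-confidence results local, and only send full frames back to the core when the model saw something novel or was unsure. The class list, threshold, and payload shape are invented for the example.

```python
from typing import Optional

CONFIDENCE_FLOOR = 0.60   # below this the edge model is effectively guessing
KNOWN_CLASSES = {"stop_sign", "traffic_light", "speed_limit"}

def triage_detection(label: str, confidence: float,
                     frame_bytes: bytes) -> Optional[dict]:
    """Decide what, if anything, leaves the edge device.

    Returns None (keep only local metadata), or a payload for the core:
    metadata always, raw pixels only when the model saw something novel or
    was unsure, the cases worth paying transport and storage for.
    """
    novel = label not in KNOWN_CLASSES
    unsure = confidence < CONFIDENCE_FLOOR
    if not (novel or unsure):
        return None  # routine, high-confidence detection: store locally, move on
    payload = {"label": label, "confidence": confidence, "novel": novel}
    if novel:
        payload["frame"] = frame_bytes  # only novel sights earn a full upload
    return payload

# A familiar stop sign at 0.93 stays on the device; an unknown sign goes back.
print(triage_detection("stop_sign", 0.93, b"..."))                  # -> None
print(triage_detection("roundabout_sign", 0.88, b"...")["novel"])   # -> True
```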

Matt Trifiro: So maybe you have layers of storage, right? So you've got maybe sort of the original video at the edge for five years and you send the metadata to the core, and if the core wants that frame, it could maybe request it.

And so there's this like, you know, balance between them. Yeah, I can, I can see that making sense.

Jay Limbasiya: It definitely doesn't make sense to me to completely say, Hey, the data is too large at the edge, so we're just not gonna consume it, send anything, right. Right. That obviously doesn't make sense. Yeah. I think there's an [18:00] efficiency play here as well.

Yeah. Right. And that efficiency play is, obviously it costs money to store data, right? So let's make sure we store the right data. Right. Right. That makes sense. And also the transport, I think you mentioned it before, right? Some of these edge devices are located in areas that have almost no connectivity. I mean, 5G is great, but if you can't connect to it, what's the point?

Matt Trifiro: Yeah, yeah, yeah, I get that. So you're fond of talking about what you call the last mile problem. Can you describe what you mean by that? And can we go into that a little bit?

Jay Limbasiya: Yeah, so the last mile problem, specifically related to data science, is when you take a, I like to call it a proof of technology, right?

When you take an algorithm that's developed, that's a proof of technology based off of a use case, let's just take predictive analytics for instance, right? You develop an algorithm with a subset of data. That data trains this model, and you're able to get, you know, X as your output, right? Based off of backcast data that you already have.

If we look at something like energy production, right? You can do a predictive analytics model on how much energy's gonna be produced from a wind, you know, a single wind [00:19:00] turbine, and can you backcast it to see if it's accurate? Well, that's taking a subset of data to build that model, right? The last mile is really taking your refined algorithm that says, okay, well this has an accuracy of 40%, 50%, whatever, and actually understanding how it can be bridged into production, right?

How can you actually take the algorithm, put it into a productionized environment, and take the output of what comes out of that and build it into your operations, your business operations. So it starts influencing decisions, whether it's positive or negative, whether it's efficiency gains or profitability.

How can you actually start making decisions from your productionized model, and why is that hard? Well, it's hard because I think when you, when you work in an R&D environment, everything before the productionization is all research and development. Essentially, it's a controlled environment, right? You can control where your model sits, how it's trained.

Essentially, you can control everything about it, but when you start opening it up to data, that is a larger subset, real time data, [00:20:00] right? You have to now start thinking about, well, how am I gonna collect that data? Where am I gonna collect it? How am I gonna clean it? How am I gonna refine it to a point that is consumable by this specific model I've created?

And by the way, how am I gonna monitor the health of this model? Now lastly, all this is great from a technology standpoint, but we still need to think from a business standpoint, from a use case standpoint. It's the business that invests in AI, right? It's not the other way around. IT organizations aren't investing in themselves.

It's the business itself saying, Hey, I need to become more efficient. I need to deliver a better quality product cause my competitors are delivering a better quality product, or I need to generate more profitability because I need to reduce my O&M overhead. That's when you have to connect technology and business, right?

Which is your financials. It's all great when you're doing it in your R&D environment, but bridging the gap between your R&D environment, getting into a productionized environment, getting the foundation layer proper and accurate so you can actually host this model and host this AI in your environment, but also making sure that [00:21:00] the output makes sense to the business.

A lot of times the business may have a completely different thought of what they think is gonna come out of their use case than what a data scientist produces, or they may not even know what the boundaries are of AI itself. Right. And that's where I think bridging that gap comes from, right? How can you make sure that whatever you're producing, based off a use case, can get into production in a timely manner, can be efficient, it can be maintained, and in the business itself, the users can make use of it.

Matt Trifiro: Okay, so you know, I'm gonna ask you the next question, which is, okay, how do you do that?

Jay Limbasiya: I think the way you do that, you know, traditionally when I did projects for the government, it would be based off of a set of requirements. They'll say, Hey, here's an rfp. These are my requirements. I wanna make sure you answer these requirements right?

But it's different. I think the business has come a long way. When we first start talking, we said technology itself, right? When I started technology, it was a foreign concept. Internet was a foreign concept cause my family couldn't even afford it. The business itself has gotten more and more educated as technology has become [00:22:00] simpler, not the actual production of technology.

I'm talking about the consumable part of the technology, right? So a cell phone, for instance, right? My, my, my mom is the least technical person in the world, but she can tell you how an iPhone works inside out. Right. Right.

Matt Trifiro: And basically modern smartphone don't really crash. Not like, right, exactly. Like old Windows PCs.

Right, right. The blue, the blue screen of death was just like, you know, once a day.

Jay Limbasiya: Yep. Yep. And so the way you bridge the gap, because our environment is getting smarter with technology, they have their own sense of what they wanna do with it. And so you start from a use case perspective. Forget about the technology itself, right.

There's a million different options out there. But if you start from the output, what are you trying to get to? What are you trying to do from a business standpoint, from a business value standpoint, then you can work backwards to say, okay, this is the return on your investment you're looking for. Right?

Right. You're looking to save X dollars or X amount of time, which equals dollars, right? Or you're trying to make X amount of money, right? Or a completely new revenue stream, for instance. Right? And that efficiency gain is what you [00:23:00] start with. And then you work backwards and say, okay, well, What is the process, right?

What process have you actually adjusted or are you gonna adjust? And what does that mean? What does that mean from a technology standpoint, what technology's already implemented? And then if nothing's been implemented, that's even better cuz you kind of get a white surface to start on, but most of the time there's already something there.

And then when you keep working backwards, keep working backwards, that's when you get to the core and say, okay, to address this problem. Which is what ultimately you're trying to do. We don't need to boil the ocean. We need a smart way of not only gathering and collecting the right data, but we also need a smart way of being able to harness the data, clean that data, refine that data, and get it to a point where it can be consumable by technology X, whatever that technology X is.

Right, right.

Matt Trifiro: And as you said, secure and reliable and all these other things that, exactly. Yeah. Yeah. That's interesting. And also you point out, and I think it's something technologists can often overlook, that a successful implementation is as much a human problem. Yep. A person management problem as [00:24:00] it is a technology problem.

Jay Limbasiya: It's a person problem because I think there's two parts, right?

I think the business thinks from the business side and technologists who are passionate about technology, rightfully so, love making it more and more complex. Right. And they make it more and more complex cuz they're so interested in it. Right,

Matt Trifiro: right. It's super cool to work with a 4K camera, but maybe you don't need all those bits, right.

Or whatever. Exactly. Yeah, I get it. Exactly. I get it. Right. I get it. Yeah. So, okay, so let's do a little exercise. Let's just oversimplify, but let's go back to London. All right. So let's say, okay, so we're running the city of London and we've got crime, we have terrorist threats, we have public safety issues.

Police support issues, fire maybe. All these things. So let's pick a use case. Let's do what you just said. Let's say we want to, I don't know, identify active shooters. Okay. Yep. So, so how would you think about that? So

Jay Limbasiya: I think the way that I would think about it is obviously we're not gonna get into every single data source that we, we, we wanna Sure, sure.

Right. I think there is a couple [00:25:00] simple pieces of data you wanna collect. Right. Obviously there's behavioral traits, there's behavioral traits that you can make sense of from historical data. From historical behavioral traits of

Matt Trifiro: humans. Right. You know, if someone's moving a certain way with a certain

Jay Limbasiya: gait.

Right. Exactly. Okay. Or maybe as simple as, you know, are there cameras located in a high secure area that are thermal cameras? Right. Can you pick up weapons that are in someone's pocket? Right. Can you understand not only can you pick up weapons or can you pick up objects in someone's pocket or on someone's person, but can you train an object detection model to specifically identify the pattern of what a weapon looks like?

Okay? Right? And then not only that, but can you make sure an alert goes out to the right people at the right space to say, Hey, guess what? There's a person over there. Go get 'em. Or go get her. So, so

Matt Trifiro: let's, let's imagine that we think we can develop an AI inferencing system that can detect, detect weapons.

Okay. Right. And so now we've gotta figure out how to collect all this data. So we need a network. Yeah. And, and we need a bunch of cameras or whatever kind of sensors to do this. And we need to figure out a way to, like, if the cameras have [00:26:00] software, how to update them. So there's all kinds of complexities there.

Now let's just oversimplify them, pull all this data back, let's say, to someplace reasonably central in the city, right? Mm-hmm. Now, how do I decide what I do there? Versus what I do somewhere else. And what are the things, like, you mentioned a term that I don't hear very often, which is data cleansing. How should I think about that in the context of data that's coming off of a camera?

And I wanna, like, how should I, how should I think about data cleansing?

Jay Limbasiya: So I think a lot of that, you know, a lot of the data engineering work, I would say, right, which includes collection of data, you know, identification of the right data, cleaning the data. I, I honestly do feel that you can do a big portion of that at the edge.

You can consume it, you can quickly clean it. You can quickly digest it and be able to make sense of it. Then there's data that, there's two parts to it. There's data that is valuable, maybe can be used in a completely different type of use case. Right. I know we're talking about security here and, and specifically active shooters.

But [00:27:00] maybe you can use the data for, you know, counting pedestrians, uh, pedestrian, right. Whatever. Sure. Right, sure. So that data itself, you know, you may not need the actual raw data itself. You may just need the metadata out of that to say, this is the data I wanna, I wanna stream back to the core. Right? A different location.

But then there's a, there's another portion of this, which is, what's the value of the data that you're collecting? How can these pieces of data enhance the algorithm at the edge? Right? Because even though you deploy something at the edge in order to refine it for future trends, maybe four years down the road or three years down the road, you still need data to be able to kind of do that R&D work.
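
Since "data cleansing" can sound abstract, here is a small sketch of the kind of cleanup that can plausibly run on the edge device itself before anything is forwarded: dropping malformed or duplicate readings and normalizing timestamps. The field names and the wind-turbine example values are hypothetical.

```python
from datetime import datetime, timezone

def clean_readings(raw: list) -> list:
    """Lightweight cleansing that can run on the edge device itself:
    drop malformed or duplicate sensor readings and normalize timestamps
    before anything is forwarded toward the core."""
    seen = set()
    cleaned = []
    for r in raw:
        if "sensor_id" not in r or "value" not in r or "ts" not in r:
            continue                      # malformed: drop it here, not at the core
        key = (r["sensor_id"], r["ts"])
        if key in seen:
            continue                      # duplicate retransmission
        seen.add(key)
        cleaned.append({
            "sensor_id": r["sensor_id"],
            "value": float(r["value"]),
            # normalize to UTC ISO-8601 so the core never has to guess
            "ts": datetime.fromtimestamp(r["ts"], tz=timezone.utc).isoformat(),
        })
    return cleaned

sample = [
    {"sensor_id": "turbine-7-vib", "value": "0.82", "ts": 1684000000},
    {"sensor_id": "turbine-7-vib", "value": "0.82", "ts": 1684000000},  # duplicate
    {"sensor_id": "turbine-7-vib", "value": "0.85"},                    # malformed
]
print(clean_readings(sample))  # only one clean record survives
```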

Matt Trifiro: Yeah. So let's just disambiguate some of this stuff cuz you talk about doing things at the edge. Mm-hmm. Right. And I think it's not clear to me whether you mean like literally on the camera or five feet from the camera or someplace in the middle of the city, or 10 places in the middle of the city or someplace.

500 miles away or 10,000 miles away. So when you talk about doing data analysis at the edge, [00:28:00] separate from like R&D and furthering stuff, what are you talking about? Where is that happening and what are those kinds of knobs that I would turn on that?

Jay Limbasiya: Well, once again, it depends on the edge device.

Right? Let's take your phone, for instance. You can do a lot with your phone, but if you take something like cameras, for instance, cameras, yeah, right. Cameras itself. Yeah. I definitely think with how small edge GPUs have become, for instance, I definitely think you can absolutely do most of that work on the camera itself.

Right? But the problem is, is that can you identify something that's gonna take a longer period of time? Quickly enough, or it's maybe something that you're not able, not capable of doing at the edge. Cause it requires more compute power or more, you know, more X, whatever that technology is, and stream it back to maybe something that's five feet away.

Right. And that five feet away will do kind of the, the, yeah. I mean

Matt Trifiro: maybe I have a street side cabinet every two blocks that has a rack of GPUs in it. Yeah. That you know Exactly. I've got sub one millisecond latency and maybe that's enough to detect a human shooter. So let's, let's say that I have that, would you call that edge? [00:29:00]

Jay Limbasiya: I think I would call that edge.

Matt Trifiro: Okay. So I'm doing kind of infrastructure edge or, right, you'll have a thousand different names for it, but Exactly. Yeah. There's the on premises edge or the on device edge or the on the street pole edge. And then there's the, like, there's like near edge and far edge, essentially.

Yeah. Well, nobody, nobody agrees on whether near is nearer to the core or nearer to the edge. Right. That's why I stopped asking people to define the edge, cuz like everybody has a different definition. Okay. And then I've got my research scientists, as you said, some maybe experimental workloads that take longer to do.

Like, I'm trying to improve the ability to see, mm-hmm, weapons through clothing or something, right, you know, whatever the, the inference thing is. Okay. So this is one of the things that my listenership may not be entirely clear on. The language that I often hear is we will train the models on the core or somewhere else, or in the R&D labs, right?

And then we'll take those models and we'll push them down to the edge, whether that means the device or the streetside cabinet or something else. What are those two things? For people who think they maybe understand, but maybe don't: why do I have these two things and why should I care? And what's the [00:30:00] compute difference and why are they different?

Jay Limbasiya: Well, when you train a model, it requires not only a tremendous amount of data. You need to have a lot more data when you train a model to get it to a point where you can refine it and refine it and refine it, and you get it to a point where you know the output is acceptable. The reason I say that is because there is no model in the world that is a hundred percent accurate.

There's always going to be a percentage of error, but are you comfortable with that percentage of error is where... Right.

Matt Trifiro: And do you have an escalation path to, like, correct for the error?

Jay Limbasiya: Exactly. Or do you have a feedback, like I said, right. I think I mentioned the feedback loop. The feedback loop will, you know...

Matt Trifiro: Or even a human in the loop. Right,

Jay Limbasiya: right. Or keep, right, exactly. And so yeah, like, is that a gun?

Matt Trifiro: No, that's a broomstick.

Jay Limbasiya: Right? And so when you train the model at a core location, it's more controlled, it's a controlled environment. It's kind of like a sandbox environment, right? Where you are building it out in an environment that if it were to go down the path that you don't want it to go down, you can quickly bring it back and make sure it's not hurting anybody essentially.

Right. But when you actually deploy to the edge, that's when things [00:31:00] get real. That's when you're starting to take action with the output of that model, and that's why it's so important to have that core focus, that core R&D effort, I guess you can call it, right, where you are consistently retraining and developing new models, because a model can degrade, of course, right?

It's not going to be perfect for years and years and years and years. If you don't monitor it, if you don't monitor the health of it, you don't monitor the output of it, or you don't make sure that there's new data that may be coming in, adding new data that can refine that model that wasn't there previously.

For instance, you're gonna have kind of stale outputs. It can become very dangerous. Technology is only as good as we make it. When you start deploying some of these models, we're talking about camera models right now. Right? But when you start talking about autonomous driving, for instance, sure. Right. Yeah.

Yeah. I mean, sure. Your, your listeners know, you know, it can become extremely dangerous and life-threatening and cause severe harm if you have a model that maybe thinks a, uh, wall is not a wall there and keeps driving.

Matt Trifiro: Right. Even though human drivers do

Jay Limbasiya: that all the time. Yeah, [00:32:00]

Matt Trifiro: exactly right. Exactly right.
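
To picture the model-health monitoring Jay is describing, here is a minimal sketch that tracks rolling accuracy over human-reviewed edge predictions and flags the model for retraining once it drifts below the accuracy it shipped with. The baseline, tolerance, and window size are arbitrary assumptions for illustration.

```python
from collections import deque

class ModelHealthMonitor:
    """Track rolling accuracy on reviewed edge predictions and flag the model
    for retraining when it degrades past an agreed tolerance."""

    def __init__(self, baseline_accuracy: float, tolerance: float = 0.05,
                 window: int = 500):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong

    def record(self, was_correct: bool) -> None:
        self.outcomes.append(1 if was_correct else 0)

    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_retraining(self) -> bool:
        # Stale outputs show up as accuracy sliding below what was validated at the core.
        return (len(self.outcomes) == self.outcomes.maxlen and
                self.rolling_accuracy() < self.baseline - self.tolerance)

monitor = ModelHealthMonitor(baseline_accuracy=0.92)
for correct in [True] * 400 + [False] * 100:   # the model starts missing new cases
    monitor.record(correct)
print(monitor.rolling_accuracy(), monitor.needs_retraining())  # 0.8 True
```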

Yeah. There's some interesting ethical decisions we're gonna have to make about how much we allow mistakes to be made by AI and who holds liability, maybe a more difficult problem than the AI itself. So in general, is it safe to assume that the training of a model is something that can be done, quote unquote offline?

Yeah, batch at human speed using close to infinite resources. Right. And then I package that up in a trained model, which I presume is a lot smaller. And you said maybe could run on an edge GPU that's in a camera, right? I'm, I'm just trying to get a sense of order of magnitude. So is the training system, let's say for cameras, is it like, 1, 2, 3, 10 orders of magnitude bigger than the model?

What's the relationship between, I know this is a very general

Jay Limbasiya: question. Yeah. This is, this is, but

Matt Trifiro: is it like, is it like to do something like visual identification? I mean, I imagine the training corpus might be absolutely massive.

Jay Limbasiya: Essentially, if you talk to a data scientist, they're gonna say, the more data I have, the [00:33:00] better.

Okay. Right.

Matt Trifiro: And if you, if you use, the more good data I have, the better.

Jay Limbasiya: More good, yeah. Exactly. Exactly. So if you take like facial recognition for instance, right? Yeah. It's come a long way since I've worked on it, of course. But you know, when I did it back then, it was, you needed 30,000 pieces of data, right?

Of images, facial images, to even get it to a point where you're looking at like a 30% accuracy, but obviously really 30%. You know, that was way before, I mean, that was years before as GPUs have evolved, right? Technology has evolved. The models themselves, most of the models that you work on now are open source models that you're refining, right?

So I would like to think that the scalability is not that drastic anymore, but I still think core is going to be at least two to five x more than what you're gonna be doing at the edge. Right. Okay. I mean, yeah, maybe orders of magnitude more, but Exactly. Exactly. Right. I mean, and it makes sense, right?

Because at the core, that's when you are not necessarily looking at making an efficient model yet it's not a productionized model, which is what you're gonna push to an edge, right? Right. You're just trying to make sure the output is correct.
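
One common way that core-versus-edge gap shows up in practice is a full-precision model trained at the core and a smaller, quantized copy shipped to the device. A hedged PyTorch sketch, with a toy architecture standing in for whatever the real workload would be:

```python
import torch
import torch.nn as nn

# Core side: the full-precision model trained and validated with ample compute.
core_model = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
# ... the heavy training loop over historical data would run here, at the core ...
torch.save(core_model.state_dict(), "core_model_fp32.pt")

# Edge artifact: quantize the linear layers to int8 so the same architecture
# fits the memory and latency budget of a small edge box.
edge_model = torch.quantization.quantize_dynamic(
    core_model, {nn.Linear}, dtype=torch.qint8
)
torch.save(edge_model.state_dict(), "edge_model_int8.pt")

# The edge device only ever runs inference with the compact copy.
frame_features = torch.randn(1, 128)
print(edge_model(frame_features).shape)  # torch.Size([1, 10])
```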

Matt Trifiro: Right. [00:34:00] Well, and as you pointed out, I may not even know. What I'm looking for yet, meaning I may have this vibration data from 10,000 wind turbines over the last two years, and who knows if there's any patterns that I can detect that would predict maintenance.

Right? So I've gotta go through it and look for correlations and anomalies and try to build a model and test it. And yeah, I mean that could be just a massive amount of data that I don't know which is useful or not until I've fully gone through it.

Jay Limbasiya: Well, and you also may identify, the core is actually where you identify, is there data I'm missing? Like, is there something I haven't even collected? Mm. Right. You may have a hundred petabytes of data. You know, let's say this, you may have a hundred petabytes of data for wind analysis and all these sensors on a wind turbine, but you may come in there and say, man, I'm missing weather data.

Right,

Matt Trifiro: or just the angle of the camera,

Jay Limbasiya: or Yeah. Or that, yeah. How high, how high is the

Matt Trifiro: camera off the ground and what's its angle? Right. Because I can

Jay Limbasiya: now, and here's, here's the number one thing with edge devices, right? They're at the edge. So you're not gonna get a consistent pattern of data that is reliable all the time.

You may have issues with connectivity. You may have issues with [00:35:00] the Edge device itself, right? Devices are, they're not a hundred percent fault proof. You may miss some gaps in your collection of data. There's a lot of things that can go wrong when you're collecting data, but that doesn't mean that you're not going to make use of the data.

It doesn't mean that the data's not valuable. It just means that it's kinda like a puzzle. You need to take bits and pieces of different groups of data and, and make sense of it together.

Matt Trifiro: Okay, and this isn't really about the edge, but I'm, since I have an expert, I'm gonna ask you to disambiguate this for me.

So lots of ways of storing data, the three phrases I hear all the time. Data warehouse. Data lake. Data Lake house. Yep. I think those are all silly terms cuz I don't know what they mean. I'm being facetious cause I actually do know what they mean. But I think it is, it is sort of a, you know, terms of art that like, you know, only certain people, it's like code.

So how should I think about these models of storing and accessing data and how should I relate that to the edge? I think that's the question I'm trying to ask for my audience.

Jay Limbasiya: Yeah. So when you think about data warehouses, data lakes, and data lakehouses, right? What's the [00:36:00] difference? The main difference is, it's an evolution. The data lakehouse is the latest terminology.

Matt Trifiro: Okay, so, so these are on a sequence, right?

Jay Limbasiya: Exactly. It's evolved into a combination of a data warehouse and a data lake, which is what we call a data lakehouse. Okay. And the data lakehouse itself is interesting because, think about all the off-takers of data. You're not just talking about it from a business standpoint, not just talking.

What is, what is an offtaker? A consumer offtaker would be users, right? And individuals that will consume that data. The data scientists that, yeah. Okay. Got it. Right. Data scientists, you'll have compliance individuals that'll be making sure that, you know, the data is refined and security. So a data lakehouse is, I like to think of it as like the best of both worlds.

You have an environment that you can collect and centralize enormous amounts of data. That may be collected in multiple different data silos, but you're refining and getting it to a point where you can put it into a data lakehouse, which is consumable by, say, a data scientist. You're giving them access to the data they need without giving 'em access to kind of the ocean itself.

That's important, right? It's important because a data [00:37:00] scientist is not in the business of cleaning your data. They're actually not even in the business of identifying the data they need. They're in the business of producing models and taking data that's already refined and ready to use. And making sense of it by saying, okay, if I'm using a facial recognition model, here's the data set that I've already been given.

Here's a subset of that data. I don't have to worry about refining it and getting it to a point that's consumable. I'm going to take it and make my model.

Matt Trifiro: I don't think any data scientist I know has ever said that. What they complain about is how much time they spend.

Jay Limbasiya: Exactly. Exactly. And that's where a strong data lakehouse foundation kinda eliminates that excess time that a data scientist has to spend.

Matt Trifiro: Let's dig into that a little bit. Okay, great. So a data lakehouse, it's a place that I can just store lots of different data. Right. So what does that mean? Let's imagine that I've got factories, let's shift to factories. In factories, I've got all the machines that are hooked up to it and they have status data that comes out of it.

I've got cameras there that I've got aimed at the production line that I can maybe do some AI on. I've got a bunch of the data that I'm collecting, [00:38:00] and you're saying that these may locally be collected in their own like specialized systems. Right. And then in order to make it more accessible, so I don't have to force my data scientists with all these different systems, I wanna store it somewhere.

Centrally, as you mentioned, right, right. Whether that's literally in the center of the state, but it's a place where all these things, what is inside of a data lakehouse? I mean, is it a bunch of hard drives, and is there a schema? Well, it's different than just storing in a giant S3 bucket. How's that not a data lakehouse?

Jay Limbasiya: Well, when I think about a data lakehouse, it consists of not just storage, of course. Okay. Right. S3 is just storage, but it also makes use of a compute layer on top of that where, where you have your more advanced, you know, GPUs that are located on top of that. And then on top of that you also have your MLOps layer.

There is software that's defined on top of the hardware that will allow you to not...

Matt Trifiro: I see. So, so a data lakehouse is both the modern technology to retrieve and analyze data as well as the data itself in one box?

Jay Limbasiya: Exactly. Right. Okay. Well, and it's interesting, right? A lot of individuals say, well, [00:39:00] Hey, I have data that's gonna be siloed still.

I can't bring it to a data lakehouse, right? Yeah. Yeah. Why? There could be security issues, compliance issues, you know, financial data, for instance. You may not wanna combine that with, you know... Right.

Matt Trifiro: Or like I've heard with the public safety thing, sometimes that can't leave the jurisdiction. Exactly.

Jay Limbasiya: Video can't, or something like that with jurisdiction. Yeah. That's why you gotta be a little creative. Now, think about it from a, a cost perspective, right? When you have multiple different data silos, that means you have multiple different budgets that are involved in maintaining those different data silos. You have to secure all of those different data silos.

You have to make sure that it's accessible, it's consumable, all those different things that you have to do with the data lakehouse, but now you're multiplying it by the amount of data silos that you have. Now when you try to make a more efficient environment, yeah, you can consolidate maybe three out of the five different data buckets that you have.

A, you're eliminating the need to maintain five different buckets. I wanna say you reduce your o and m costs cuz you're able to manage three buckets now instead of five. But now you'll say, well Jay, I still have this data that I can't bring into this environment. That's [00:40:00] okay. There's always going to be data that you say, look, it just doesn't make sense to bring into a data lakehouse.

We can't do it. There's issues, security issues, regulations, so on and so forth. But we still wanna make use of that data. And that's when, you know, these new technologies that have come out, data virtualization, for instance, right? Can you access the data to be able to make sense of it? Right,
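
To ground the idea, here is a small pandas sketch of what "a curated slice of the lakehouse, plus one silo read in place rather than copied" might look like to the data scientist on the receiving end. The paths, table layout, and column names are invented purely for illustration.

```python
import pandas as pd

# Hypothetical curated zones inside the lakehouse, plus one regulated silo
# whose data stays where it is and is only read in place ("virtualized").
LAKEHOUSE_TABLES = [
    "lakehouse/factory_a/machine_status.parquet",
    "lakehouse/factory_b/machine_status.parquet",
]
VIRTUALIZED_SILO = "silo/ot_network/machine_status.parquet"

def curated_machine_status() -> pd.DataFrame:
    """What a data scientist actually gets handed: one cleaned, narrow table,
    not the keys to the whole ocean of raw storage."""
    frames = [pd.read_parquet(p) for p in LAKEHOUSE_TABLES + [VIRTUALIZED_SILO]]
    df = pd.concat(frames, ignore_index=True)
    df = df.dropna(subset=["machine_id", "timestamp", "status"])
    return df[["machine_id", "timestamp", "status", "vibration_rms"]]

# Model building starts from the curated view, never from raw silo access.
features = curated_machine_status()
print(features.head())
```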

Matt Trifiro: right. It's almost like what's more important is the metaphor of the data lakehouse, right?

Yeah. Than actually the physical centralized. Yeah, exactly. If you can, if you can virtualize a silo and make it look like it's coming from the data lakehouse, it's, it's essentially the same thing. That's super interesting. Super interesting. Let's shift to the future. Let's look forward a little. What's most exciting to you about what's happening at the edge?

Jay Limbasiya: I think what's most exciting to me about what's happening at the edge is the quality of life that's gonna improve. So what I mean by that is think about all the edge devices. A lot of times individuals when they think about edge, I know we talked about cameras here. That's like probably one of the most common things that individuals think about when they're like, oh, edge, camera, phone, so on and so forth.

Right? Because they interact with 'em daily. But really what I'm talking [00:41:00] about is, you know, think about how edge can be applied in like agriculture, for instance, right? In farming. Yeah, right.

Matt Trifiro: I just interviewed a guy who runs an indoor, greenhouse-based autonomous farm. Yeah, exactly.

Jay Limbasiya: I mean, it's fascinating, right?

Yeah. Can you produce a product that is more nutritious and picked at the right time to get the optimal output out of it? I think it's critically important because technology itself can be used for good, but it can be used for bad as well. Mm-hmm. And this is where I'm extremely excited to say, okay, the good part of technology is, can I make my quality of life better?

Can I make my health better? Can we live longer? Can we figure out a way that we can solve certain illnesses in individuals, right? Those are all really common things that excite me. Essentially, what's gonna happen in the future.

Matt Trifiro: Yeah, I get that there's already improvements in your life based on, you know, things that are...

Jay Limbasiya: Or even driving, right.

Like my parents are getting to an age where I'm getting concerned about them, you know, them driving essentially a death machine on wheels. Yeah. Well,

Matt Trifiro: And even if they're not fully autonomous, they're making cars now that are a lot harder to crash into something or [00:42:00] leave a lane. Right, exactly. So that's, yeah, that's really interesting.

And I imagine it'll be an acceleration, right? Like we're just on the verge of this. Right. Exactly. Yeah, that's really interesting. Are there things that you would, you would wanna accelerate? Meaning, let me ask this a different way. You can see this imagined future and you can see where we all are today.

And if you could say, gosh, if just this group of people do these things and this thing and this thing, like, you know, if you can imagine the dominoes falling, like which ones would you nudge if you could, like what would help accelerate this world?

Jay Limbasiya: It's a very personal question when it comes to this.

So I'll say, this is a hundred percent my opinion, right? Okay. I would nudge healthcare, the healthcare industry, and the energy industry. Right. Okay. The two largest old school industries in the world that have the most amount of data that's been stored for years and years and years. I feel like we haven't gotten to a point where we've identified a way to go completely clean energy, for instance, right, for the environment.

Yeah. That has tremendous impact on the healthcare industry, has tremendous [00:43:00] impact on just the quality of life, for instance. And then healthcare itself: there's a number of different parts in healthcare that I think can be accelerated, but even something like the smart hospital, right? We went through Covid recently, and that Covid outbreak really

helped us understand how far behind our hospital systems are. We can't monitor our patients' health sometimes in hospitals that aren't connected, because the technology there hasn't been correctly implemented since, you know, maybe 15 years ago or 20 years ago. That has been impacting everyone's life, right?

Covid was something that impacted everybody, and I think that's exactly where there's been a lot of focus recently: seeing how far we can bring the energy industry and nudge that industry as a whole, and also figuring out a way to nudge our healthcare industry to become more efficient.

Matt Trifiro: You know what I love about that answer? Cuz usually when I ask a question like that, the answer I get is, well, you know, I'd accelerate deregulation. But what's interesting about your answer is you're starting from the demand side.

Yeah, which I think makes a lot of sense, because that is how you tend to get things to happen more quickly.

Jay Limbasiya: It's funny, you're not the only [00:44:00] individual who's asked me this question, and I get very similar answers when it comes to, like you said, deregulation, or very politicized answers.

Right? Yeah. My life motto has been, how can I do what I know and help humanity as a whole? Doesn't matter where you fall politically or what you believe in, it's how can I make sure that whatever I do, whatever I output in this world, can have a positive effect on everyone's life. Whether it's easier access to healthcare, whether it's, you know, a better quality of life through more efficient energy production.

It's not so much about what's specifically affecting me right here, right now. It's what's in my environment and how can I make an impact in my environment.

Matt Trifiro: Where do you stand on this very current debate of how worried we should be about AI, right? I mean, clearly it's interesting and fun and it's making better products, and like, am I just gonna stick my head in the sand? Cuz like I'm using it and it's great and like I don't see any danger.

But a lot of smart people that know more about this are worried. I mean, so where do you stand on that spectrum?

Jay Limbasiya: I think ethics is a very interesting topic when it [00:45:00] comes to AI. And I say that because I started my career in our intelligence community, where, right, my goal is to protect my loved ones in the country.

Right? Yeah.

Matt Trifiro: And maybe push up against ethical boundaries

Jay Limbasiya: in order to do it, right? Exactly. And I will say that technology and AI can be used for good and it can be used for bad. Just like the internet. Actually, the internet is the same concept. When the internet came out, people had the same questions, right?

AI's in the same boat, I think. I think AI is something that if you choose to use it for good, then it will be good. But if you choose to deliberately use it for bad, then it will be bad. I think that goes for anything, right? Whether you use a car or whether you use, you know, whatever in your current environment.

It's an active choice of you as an individual and how you choose to use it.

Matt Trifiro: Isn't there some of this debate, though, that, look, AI, one of the things that's interesting about it is it's kind of non-deterministic? Like, we know how these large language models quote unquote work, because we wrote them, but we [00:46:00] don't know actually what they're going to produce.

Right? Right. It's like we don't know what they're gonna do, and sometimes they're gonna do things that look evil, even though you didn't intend that. So it's like, you know, unlike typical applications we build, which are fairly deterministic, this is a different world. We're relying on, you know, it inferring correctly 20 times.

Jay Limbasiya: Yeah. Yeah. So I've heard that too, right? I've gotten the same question about the large language models. Yeah. And I don't know the answer. And so here's my thought process, right? Yeah. Say I owned an autonomous car, right, similar to like a Tesla or whatever. Mm-hmm. It comes with these features that say, okay, well, there's auto-drive involved in it and it will automatically drive for you.

Me as an individual, I get to determine how much trust I choose to put into that technology. When I get behind the wheel and I say I wanna turn on auto drive, do I completely put my feet up, take a nap, read a book and say, Hey, this, everything's gonna be okay. That's my personal choice. I choose to trust that technology, right?

Same thing goes with a large language model. It's a [00:47:00] tool in our tool bag as humans. I guess a nail gun could go crazy too. Exactly. Right. People cut their arms off with chainsaws all the time. Right. And I think a lot of that's educating our environment, educating our kids, educating our youth to understand, hey, this is very useful in gaining knowledge.

Right? But it doesn't mean that it's a hundred percent accurate.

Matt Trifiro: I wish someone, when I was younger in my life, had taught me the law of unintended consequences.

Jay Limbasiya: Exactly. Exactly. Right, right.

Matt Trifiro: I would've gotten farther ahead faster if someone had explained that to me as much as you think about this.

Jay Limbasiya: Yep. And I mean, I'll be honest, I have two young nieces, and they're growing up in a world that is going to be completely different than what my sister and I grew up in. Yeah. And that's one of the things I worry about: it's not so much are they gonna interact with the technology, it's more are they going to understand

the limitations of the technology? Mm-hmm. Are they going to understand how to make use of it, where to make use of it, and when to say, uh-uh, this isn't right, I'm not listening to this? I think that line has been kind of fuzzed out a [00:48:00] little bit as technology has evolved, right? I mean, think about the amount of kids with cell phones.

I didn't get a cell phone until I was in college, and even then, it was like when we had to pay for minutes, and my parents were like, hey, you got no...

Matt Trifiro: And their behaviors are different. I have two young boys, and when I call 'em on their cell phone, which I rarely do cuz I never use the phone app, but when I have to call them because, like, their Alexa's not working and they need to come to dinner, they don't say hello.

That's just what that age group does. Cuz they're, like, on it. It's like, my phone is just like Discord. I don't say hello, like, you know, I'm just like, what?

Jay Limbasiya: And you know what's crazy is, coming from the intelligence side, it's so critical to teach our youth the boundaries of technology, right? There is an ethical commitment that I think society has as a whole to educate individuals about the technology itself.

It doesn't mean that we should limit the technology. It means we should get educated in the technology and understand how to properly use it. Yeah. And I think that's really where I stand.

Matt Trifiro: That's awesome. That's awesome. Hey Jay, thank you so much for coming on the show. Yeah, this has been super interesting.

Really enjoyed this panoptic conversation, and we'll hope to see you more at the edge.

Jay Limbasiya: Absolutely. Thank you for having me. This has been [00:49:00] fantastic.

Narrator 2: That does it for this episode of Over the Edge. If you're enjoying the show, please leave a rating and a review and tell a friend. Over the Edge is made possible through the generous sponsorship of our partners at Dell Technologies.

Simplify your edge so you can generate more value. Learn more by visiting dell.com.