Over The Edge

Deploying Visual AI at the Edge with Sridhar Sudarsan, CTO at SparkCognition

Episode Summary

Visual AI at the edge is becoming increasingly ubiquitous, even reaching some traditionally lower tech sectors like gas station convenience stores. In this conversation, Bill sits down with Sridhar Sudarsan, the CTO at SparkCognition, to discuss their visual AI products and how they have been deployed at scale throughout gas stations and how they are using visual AI to enhance school safety.

Episode Notes

Visual AI at the edge is becoming increasingly ubiquitous, even reaching some traditionally lower tech sectors like gas station convenience stores. In this conversation, Bill sits down with Sridhar Sudarsan, the CTO at SparkCognition, to discuss their visual AI products and how they have been deployed at scale throughout gas stations and how they are using visual AI to enhance school safety. They dive into ethical considerations of visual AI at the edge and how SparkCognition thinks about operations in addition to model training.

---------

Key Quotes:

“These AI systems are really a tool that helps the human, who then helps the AI systems, who then helps the human. And we're constantly pushing the barrier to raise the bar.”

“From an industry perspective, Moore's Law continues to apply and I think we continue to find hardware getting less and less expensive, more processing happening at lower costs.”

--------

Show Timestamps:

(01:18) How did Sridhar get started in technology? 

(02:16) Sridhar’s time at IBM

(10:45) Visual AI use cases at gas stations 

(18:13) Using existing camera infrastructure 

(20:43) Developing an economic model to deploy edge AI

(25:11) ROI calculator 

(29:02) Privacy and visual AI in schools 

(34:32) Training ethical AI 

(40:46) Sridhar’s favorite deployments 

--------

Sponsor:

Over the Edge is brought to you by Dell Technologies to unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we’re here to help you simplify your edge so you can generate more value. Learn more by visiting dell.com/edge for more information or click on the link in the show notes.

--------

Credits:

Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko and Eric Platenyk.

--------

Links:

Follow Bill on LinkedIn

Connect with Sridhar Sudarsan on LinkedIn

Episode Transcription

Narrator 1: [00:00:00] Hello, and welcome to Over the Edge. This episode features an interview between Bill Pfeifer and Sridhar Sudarsan, the CTO at SparkCognition, a company that offers a variety of edge AI solutions across industries. In this conversation, Bill and Sridhar dive into the massive visual AI deployments that SparkCognition has coordinated in gas stations and convenience stores, as well as how they're using edge technology to enhance school safety. But before we get into it, here's a brief word from our sponsors.

Narrator 2: Over the Edge is brought to you by Dell Technologies to unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we're here to help you simplify your edge so that you can generate more value.

Learn more by visiting dell.com/edge for more information or click on the link.

Narrator 1: And now, please enjoy [00:01:00] this interview between Bill Pfeifer and Sridhar Sudarsan, CTO at SparkCognition.

Bill Pfeifer: So Sridhar, welcome to the podcast. Thanks so much for coming on. You've got a fascinating background that I can't wait to dig into a little bit and talk through some of your history.

You've done some amazing stuff. So just by way of background, how did you get into the technology world?

Sridhar Sudarsan: So first of all, Bill, thank you very much for having me on the podcast. I really appreciate it. So, you know, I started in technology. I think that was the only thing that I ever wanted to do since the time I was in grade school, and I did my first programming in BASIC back in the eighties, like many others.

But you know, once I, once I did that and I saw the results of my first program, I think I was hooked. And since then I only wanted to build things with technology, with computers, with programs, and that's kind of where I started. I was a computer science major in college and my first role was [00:02:00] building a distributed system at IBM, which is where I started and had a long career.

It was also cutting-edge technology, and I was very fortunate to be in that space from the very early days, when I started in the late 90s.

Bill Pfeifer: That's very cool. Just really, I mean, to start out at IBM and manage to stay there for a good long time, you must have seen some amazing stuff in your time there.

Sridhar Sudarsan: I did, actually. I was, as I said, very fortunate to be in the world of enterprise distributed computing back in the mid-90s, with a technology called CORBA, which is all about middleware and a little bit of a precursor to IBM launching the whole e-business platform in the late 90s. Kind of seeing that from the early days, growing into a, you know, multi-billion-dollar sort of opportunity for IBM, and helping shape that initially as a developer, as an architect, as a technology leader, thought leader, and so on.

And then also sort of seeing how IBM takes that forward was [00:03:00] fantastic, through to, you know, the mid-2000s, when I switched to my second, I would call it entrepreneurship role, where it was about taking batch processing, which was continuing to become less and less relevant from a skills perspective but was still required from an efficiency perspective, and thinking about how you do 24/7 processing.

It's not that long ago, right? I mean, now we can't think about anything that is not 24/7 processing, but there was a time even in the early 2000s when banks would shut down processing at night, things would sort of reconcile, and then they would open up for online transaction processing the following day. It was fascinating because, you know, about a year prior to Steve Jobs announcing the App Store and apps being available everywhere, this was something that I was advocating and we were talking about. And then right after that happened, you know, the world sort [00:04:00] of completely changed again and it just became the norm.

Along with that comes massive scale, massive efficiencies, so it's not that batch is going away, it's just how do you process things at all times in a very efficient way. And then my third entrepreneurship at IBM was, again, being fortunate to be at the right place at the right time. I was one of the original architects of the IBM Watson platform, which was IBM's AI platform about 14 or so years ago, right after our research team at IBM had won the Jeopardy contest, which was a great grand challenge and a big turning point for the entire industry from an AI perspective.

And I think it really, in some ways, was the beginning of AI becoming more and more mainstream in the enterprise world. So from that to sort of being the chief technology officer there and creating an entire ecosystem of partners and players building applications on Watson, building the Watson platform.

It's also a fascinating journey. In about seven years, [00:05:00] starting from when we ran the Jeopardy contest, you know, there were 2,800 cores running on refrigerator-sized units in a very large space, because one of the conditions was that you shouldn't be connected to the internet.

And then one of the last things when I left IBM was running the entire Watson processing for images and text on an iPhone, completely in disconnected mode. So there's a great journey in a very, very rapid period of time. Fascinating stuff. So that's kind of when I switched and decided to sort of use my entrepreneurial skills and come into the outside world.

And that's when I joined SparkCognition about six years ago or so. SparkCognition had been around for a few years before that. Amir Husain started it in about 2013, I believe, and I've known him since then as a friend of the company from the outside. There was a small pool of players at the time working on AI-based technologies.[00:06:00]

So what we do is we bring AI solutions into the industrial world. I think that's kind of the core mission and the core focus that we have here at SparkCognition. But that's, in a nutshell, my journey at IBM and at SparkCognition. So I continue to work with cutting-edge technologies. I've just been fortunate to have been in that space and continue to be in that space.

I've just been fortunate to have been in that space and continue to be in that space.

Bill Pfeifer: And building generative AI before it was generative AI. You've lived through history. That's really just crazy stuff. To have been through that and look back and think, I did that, yeah.

Sridhar Sudarsan: It's always, it's always fortunate and humbling.

And I'm always thankful for, you know, all the opportunities. There are a lot of things that need to happen for you to be at that place and then continue on that journey. And I'd consider myself definitely lucky and thankful for that.

Bill Pfeifer: That's fantastic. I love it. So switching from the IBM Watson type, big, centralized, generative AI down to SparkCognition's tagline, [00:07:00] AI perfected for business, right?

So it has to be smaller, lighter, more practical, more focused, going from more of, you know, ask me any sort of question and I'll give you a human-ish answer, and it'll sound like you're dealing with a person, down to something that's very tactically applicable. What drove that change in size, going from big centralized to smaller distributed AI?

Sridhar Sudarsan: Yeah, so I think from an industry perspective, Moore's Law continues to apply, and I think we continue to find hardware getting less and less expensive, more processing happening at lower costs. The science of algorithms, the advancements and the changes, being able to optimize those for the smaller form factors, or the continuation of smaller form factors in hardware, has been a big factor driving that, right?

And I think innovation is great [00:08:00] and grand challenges are fantastic in terms of opening up people's minds to what is the art of the possible. But when you're trying to bring those to scale in a variety of walks of life, right, whether it is healthcare, finance, manufacturing, aerospace, defense, et cetera (well, defense may be less about scale from a price-point standpoint), one of the big factors is the ability for that industry to afford it at large scale, to deploy it, to start seeing value.

So I think that's a big driver, and a continuous driver, in terms of how the science of it sort of becomes better every day now, every hour, so to speak; the pace, the rate of change, is continuously accelerating, which is phenomenal. Then there's the price of hardware coming down and the hardware form factors being more and more attuned towards [00:09:00] the processing of these algorithms.

So that's sort of a second factor. And the third factor is the human-machine interaction, which is the interfaces, so to speak, and that continues to evolve. There's more openness in working with and trusting responses coming from systems, from AI, where AI is not just always treated as a black box, but there's some degree of understanding.

And I think being able to get that kind of mind share in the general population, whether it is technicians at a factory, or store managers in a retail location, or security superintendents in security operations at a building, at a school, etc. I think it's being able to have that understanding that these AI systems are really a tool that helps the human, who then helps the AI systems, [00:10:00] who then helps the human, and we're constantly pushing the barrier to raise the bar.

Bill Pfeifer: Yep. And as you've said, the art of the possible just keeps moving forward, faster and faster. Now, I know at SparkCognition, you've worked across a number of industries. You called out a couple of them: manufacturing, retail, schools, school safety.

That's an interesting one. And one of the ones that I came across was gas stations. I think that's an interesting example, right? I worked in gas stations through high school. I don't tend to think of them as being high tech, you know, like we're going to put some AI in there and do some fascinating things. That seems to epitomize AI really coming farther down the stack and just becoming sort of everywhere.

What sort of use cases do you find in gas stations, and how do you make those addressable at scale?

Sridhar Sudarsan: Yeah, so at SparkCognition we offer a variety of different types of products, so to speak, that encapsulate [00:11:00] the AI engines and the AI complexities and offer direct solutions for the end user in a very simple, consumable way, whether it is, you know, time series data coming from machines or logs, or unstructured data coming from manuals or incident reports or tables or research reports, etc.

And then a third one is around computer vision data, so video data. And so at gas stations, one of the first areas we've sort of brought in is that computer vision product. We call it Visual AI Advisor, or VIA for short. So if you hear me saying VIA, that's probably what it is. It's not a routing short form, but the Visual AI Advisor.

So what we have found is, if I step back from gas stations for a second and just talk about what we're doing with VIA, I think if you look at the world around us, cameras are everywhere. They're ubiquitous, right? And probably for the last 30, [00:12:00] 40 years or so, it's been almost unthinkable to have a building without cameras.

Whether it's a retail store, a gas station, a bank, a school building, an office building, etc., right? And these days even at homes. But, you know, given the fact that there are probably around a billion, two billion-plus cameras in the world, if you think about what these cameras are doing, they're very passively recording information.

What does that mean? It means that they're recording information; they're eyes on the ground, or eyes in the air, or eyes on the location at all times. But it's almost like we're seeing things and it doesn't go to the brain. That's really what is happening with these cameras, right?

So they're, in some ways, so to speak, dumb from that perspective. So what are we doing? Our whole objective and our whole mission is to convert these passive cameras into active sensors. [00:13:00] As in, you add a brain to all the things that the camera is seeing. Just like we react to situations and look at things and process things in certain ways, that's what the AI models we deploy do with the live feeds from these video cameras.

And what it does then is based on the business or the enterprise or the location where those cameras are, we draw from, you know, about 150 odd use cases that we have. And then the ability in our platform to add in any number of additional use cases and process these. And then we alert the appropriate people that are responsible for certain things.

So a safety manager is responsible for safety of that location. A security person is responsible for the security of that location. An operations manager is responsible for the operations, a customer [00:14:00] service manager is responsible for making sure that the customers are getting serviced properly, and the food service manager is responsible for making sure that the food is prepared properly in a safe environment, in a healthy environment, in a timely environment, and so on.

So as you can imagine, at a gas station, all of these are roles that exist. Now, in a smaller location they may be played by one or two or three people. But as you start looking at it, it's a small sort of location, it's around the corner, and at least in the U.S., there are about 152,000-odd gas stations, with an attached convenience store in most cases. And as you can imagine, why do we go to the store as consumers? We go there for something that we want to get very quickly, either a morning breakfast pickup or a coffee, or because we all go for gas.

Or charging your vehicle, depending on the choice, though most of these are gas stations, so diesel or [00:15:00] petrol. And alongside that we stop into the store, we pick up certain things, and we leave, right? So the thing that you get there is the speed of service, the quickness, and sometimes, you know, just the familiarity with the people. So even within these stores, typically there are cameras both inside the store as well as looking at the forecourt. And so what our AI models do, what our Visual AI Advisor models that are running there do, is measure things around productivity of the employees, around customer service.

How long is the queue? Like, for example, if there's a queue that is six people deep, very likely a person coming in will look at that and compute in their head the fact that, you know, it's going to take a long time: I'm just going to leave and I'm going to get my coffee from somewhere else. Or, you know, I don't have the time. And I think we're all kind of seeing that.

So can we detect that and avoid that? So what we do is, as an example, we measure the predicted wait time for the last [00:16:00] person in the queue and we notify, you know, an employee who might be in the back sorting their inventory, for example, and tell them that, hey, it's time to go open up a new cash register.

So proactively avoiding that. Or, for example, from a labor management perspective, if you think about restroom cleaning: if we know how many people are going in and out of the restroom... Today, if you look at most traditional ways of how restrooms get cleaned, they're on the clock, right? Every hour, every 45 minutes, every two hours, etc.

Let's say in an hour only two people have used the restroom; perhaps you don't need to do that. So it's converting the clock-based, time-based sort of schedule into something that is need-based. On the other hand, you could have 10 people use the restroom in 40 minutes, and you may want to send, and we do send, an alert saying, hey, you may want to check the restroom and do a quick check to see if it's clean. So you're not waiting on a timer. Because one of the things that is a very well-known fact is that clean restrooms are very, very important for convenience stores, right? Which are attached to the gas stations. [00:17:00]

So things like that, and then spills in the store, product placements, being able to sort of highlight when there's an issue with a hotbox. We're in Austin, Texas, and so tacos are a big breakfast item here. And so if, let's say, you have a hotbox and you don't have enough tacos, you don't want the person coming in to potentially leave without purchasing tacos, which is one of your highest-margin products that you're trying to sell.

So all of these are alerts that get sent and notified to either the store manager or the employees, so they're acting on it immediately and preventing and avoiding a situation that could be, as I said, across efficiency, a lost customer, a product not being there. Or even security, right? Somebody coming in with a weapon. Or there's a spill there and you want to notify someone, so that you don't have somebody slipping and tripping over a spill in an aisle.

These are the types of scenarios that we [00:18:00] monitor and the AI models look for, and we alert the stores about them.
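The queue-monitoring alert Sridhar describes can be sketched roughly like this. Everything here, the function names, the service-time constant, and the alert threshold, is an illustrative assumption for the sake of the example, not SparkCognition's actual system:

```python
# Illustrative sketch of a queue wait-time alert. An upstream vision model
# is assumed to report how many people are in the checkout-queue region.
from typing import Optional

AVG_SERVICE_SECONDS = 45   # assumed average checkout time per customer
MAX_WAIT_SECONDS = 120     # assumed threshold before alerting staff

def predicted_wait(queue_length: int, open_registers: int) -> float:
    """Estimate the wait, in seconds, for the last person in the queue."""
    return queue_length * AVG_SERVICE_SECONDS / max(open_registers, 1)

def queue_alert(queue_length: int, open_registers: int) -> Optional[str]:
    """Return an alert message when the predicted wait crosses the threshold."""
    wait = predicted_wait(queue_length, open_registers)
    if wait > MAX_WAIT_SECONDS:
        return f"Predicted wait {wait:.0f}s: open another register"
    return None
```

Under these assumptions, a six-person queue at a single register predicts a 270-second wait and triggers the alert, while a two-person queue (90 seconds) does not.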

Bill Pfeifer: And it sounds like, by and large, you at least have the potential of using existing camera infrastructure that's there, rather than installing something net new just to do this.

Sridhar Sudarsan: Absolutely. In fact, one of the foundational principles when we started building this technology was to use existing cameras.

Our whole philosophy was, there are, as I mentioned earlier, potentially a billion-plus cameras in the world, so what we don't want to do is go to a customer who has cameras and say, okay, the first thing you need to do is rip and replace all your cameras. Because that's cost that they have to incur, and it's a lot.

It's less expensive to process things in software than to rip and replace hardware and wiring and networking. So we support, you know, many, many types of cameras, [00:19:00] multiple generations old, including analog cameras. And you'd be surprised at how many of those are out there.

Bill Pfeiffer: Well, if they don't, if they're not broken, then why not?

They're already hanging there. Just leave them on. That makes a lot of sense. I've heard of cities that have four or five, six cameras at. You know, the same intersection pointing in basically the same direction, just doing different things. And that just seems, and grocery stores as well, right? Really tight margins in grocery stores, and they'll have a loss prevention system and a physical security system and an inventory system.

And they're all using separate cameras, kind of pointed in the same places. And that's just, that's a whole lot of overhead that you're going through, and it's a much more. It's a much more environmentally friendly solution if you can just use what's already there. It's a much less expensive, cost effective solution if you can use what's already there.

That's fantastic. I love it.

Sridhar Sudarsan: And it's a faster time to value, right?

Bill Pfeifer: Oh, [00:20:00] sure, yeah.

Sridhar Sudarsan: Which is the most important thing. I think in many industries, one of the challenges that people have is that while the software can showcase well in a small area or in test conditions, how does it behave when you deploy it at a larger scale?

So the other thing is also about how quickly you can deploy it, and number two, how large your deployment is. So, the fact that we've deployed across 140,000-plus cameras, across about 16-odd countries, we've kind of learned, you know, a lot of things along the way, and I think that's what the software is now designed for: scale.

Bill Pfeifer: That's very cool. So how does a typical company develop an economic model that supports edge deployments with AI? How do they quantify the value that's generated, the return on investment, the use cases that they can reasonably, affordably fund, where it makes sense to pay [00:21:00] for it?

Sridhar Sudarsan: Yeah, it's a great question, because that's what it ultimately boils down to: do I need this?

Do I need this now? And how quickly am I going to start seeing the return on my investment? And so here's the way we look at it. Because another thing we didn't talk about earlier is the fact that we don't store any videos, right? Our solution does not store a single video. So what we do is we deploy an edge box, you know, an edge box that gets deployed at the location, either near or close to the DVR or the NVR or the VMS system that the location may have, whether it's a building, a school, a server room, etc.

Or it could be in a centralized monitoring center, depending on how large or small the set of buildings is. I think there are different types of configurations. And so that edge device is a fairly standardized sort of edge device. Of course, we have certain specifications that we endorse and that we test our [00:22:00] products on, a variety of our products on, but it's not something that is highly custom and only available from one place or another, right?

So that kind of allows people to start thinking about it. What that also does is, because there are multiple players, the prices are, as I mentioned earlier, continuing to get more and more competitive and affordable. Even from the perspective of a C-store or retailer, which is a very low-margin play, it becomes very viable and very affordable from that standpoint.

The other thing is, we use an edge device for multiple cameras. So depending on how many scenarios you have, it's not one edge device per camera or anything like that, right? It could be anywhere from, you know, 10 to 40 cameras per edge device within a location, depending on what kinds of scenarios you've set up and how many frames per second are being processed by each scenario and so on. So there's a whole sort of calculator that we have around that.

From a return on investment, from a business standpoint, I think there are multiple entry points that we've [00:23:00] seen. For example, you know, if you take safety, there's a cost of safety, depending on what location you're in, right?

I mean, you cannot put a price on a life, but you know, when there are industrial environments, when there's a school, worst-case scenarios where kids are, and you have security incidents, not safety but security incidents, then even one of those that could be avoided through a proactive alert is a valuable sort of investment, if you think about it.

But even if you go beyond that, I think if you look at safety, slips, trips, and falls make up about 70-odd percent of claims that happen in the workplace, right? Whether it is by customers or by employees, which goes through OSHA, for example. And so being able to prevent and avoid those matters, because every one of those claims translates into some cost for the business.

So being able to prevent those is [00:24:00] another factor of the ROI, one of the elements where, you know, they're seeing value. Another example would be around top-line growth, right? We've seen, for example, I mentioned earlier the case of breakfast items or lunch items or dinner items, even in a C-store, right?

With a thin margin, those are some of the highest-margin products, along with beer, and having a customer or two walk away a day, and seeing that you are not converting those customers, is money lost. So that's a top-line sort of impact. And we've actually run some tests, and we've seen at those C-stores, for example, a 14 to 18 percent increase in sales, just based on one item being monitored, as long as people are responding to the alerts and acting on them, yeah.
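The edge-box sizing Sridhar sketches, a box serving anywhere from 10 to 40 cameras depending on how many scenarios run per camera and at what frame rate, can be illustrated with a toy calculator. The capacity figure below is an invented assumption for illustration, not a real spec of any SparkCognition device:

```python
# Back-of-the-envelope sizing: how many cameras one edge box can serve,
# given per-camera load (scenarios x frames-per-second per scenario).

EDGE_BOX_CAPACITY_FPS = 400  # assumed total frames/sec one box can process

def cameras_per_box(scenarios_per_camera: int, fps_per_scenario: int) -> int:
    """Cameras one edge box can serve under the assumed capacity."""
    per_camera_load = scenarios_per_camera * fps_per_scenario
    return EDGE_BOX_CAPACITY_FPS // per_camera_load

def boxes_needed(total_cameras: int, scenarios_per_camera: int,
                 fps_per_scenario: int) -> int:
    """Edge boxes required for a site, rounding up."""
    per_box = cameras_per_box(scenarios_per_camera, fps_per_scenario)
    return -(-total_cameras // per_box)  # ceiling division
```

With this made-up 400-fps capacity, a light load of 2 scenarios at 5 fps per camera works out to 40 cameras per box, while a heavy load of 8 scenarios at 5 fps works out to 10, spanning the 10-to-40 range mentioned above.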

Bill Pfeifer: That becomes a great ROI by itself. That's fantastic.

Sridhar Sudarsan: Exactly right. So [00:25:00] what we've done is we've built an ROI calculator that takes a lot of these factors into account and also lets you indicate how many locations you have and so on, and you can calculate it. So most businesses, that's what they do.

So we take into account the edge deployment, the software costs, and all of that, but more importantly, all of these different factors. And we continue to learn new things every day in terms of what we're monitoring that we didn't really think about. We learned from a retailer, and it was then validated by many others, that labor is one of the biggest challenges that retail faces.

Many industries face it today, but retail does too. And being able to know the idle time of a cashier standing behind a register, not servicing a customer, is also important for them, so that they can allocate people to other tasks and save on labor costs where possible, right? And I think many stores are finding it hard to train people.

So it's these kinds of [00:26:00] factors, a variety of different factors, that go into the calculator that we have. Some things may apply for them and some things may not. You know, from a school safety and security perspective, as I mentioned, there it's just important. I think since Columbine, for example, there have been just under 350,000 students exposed to some kind of violence, and it's a very unfortunate reality that we're living in.

So how do you find ways to stay on top of and ahead of potential behaviors or suspicious activities? How do you ensure that you're trying to stay ahead of the risk or the threat that might occur? And so whether it's weapons inside the rooms, outside in the compounds, in the parking lots, all of these different things are what goes into our school safety product, right? Being able to determine if there's an intrusion happening. [00:27:00] Now, it's hard to put an ROI on each one of these incidents. I don't think that's how safety and security works. But a variety of school boards are looking at how to increase security and monitoring within their own buildings.

As parents are reaching out and trying to find ways to better protect, well, we're all trying to protect our kids better. I think this is a very different way to look at what the value generated is, which is really, ultimately, the safety of the students in this particular case.

Bill Pfeifer: And it's amazing how much of the AI can carry through, if you think about the use cases in certain ways.

Right? Using the AI to identify an object. The object can be a tool that you want the manufacturing person to be able to locate, or a type of spill, or a weapon. And it can, you know, apply to such vastly different use cases. It's kind of fascinating. So you were digging into the [00:28:00] school safety conversation, and that's a really interesting one, because it can get super contentious, right?

Everybody wants school safety. Nobody wants school surveillance. So privacy becomes just a huge, huge issue, and there are lots of legal challenges around that, and ethical concerns, and things like that. How do you deal with keeping this positive? You can use AI to do amazing things, or it can feel like creepy surveillance.

How do you make it more about the amazing things, and keep everyone very comfortable about the privacy, the safety, the security of all the data that's being created?

Sridhar Sudarsan: It's a great question, Bill, and it's a very important one when you look at any data; it becomes even more important when you're talking about video data, right?

And I think, as you said, it is something we have thought about from the very beginning and continue to think about, you know, every single [00:29:00] day in terms of what kinds of things we can do. There are some very clear choices that we've made in our technology to help address some of those concerns, right? And I think we continue to do more and more, but there are some very clear ones that already set people at ease.

So number one, I would say, as I mentioned in the very beginning, these cameras are already recording videos, and those videos are available for people to see. So let's assume for a second that you don't use any AI. If there is something you need to go look at, whether it's for training purposes, for forensic analysis on certain incidents, or you just want to find out what happened, it requires people to spend many, many hours watching and going through videos, right?

So that already is in place, and has been in place for many, many years. The thing with AI is you may never need to do that, because no human is actually looking at the videos. [00:30:00] The AI system does the processing: these AI models that we deploy at the edge are taking the live stream feeds from the cameras.

The models are running and detecting certain things, right? Fire, smoke, suspicious activity, a weapon, an intrusion. It's detecting, you know, a vehicle going too fast or a vehicle overstaying at a gas station, any of these scenarios or anomalies or alerts that I mentioned. It's looking for those, creating an alert around that, and then creating a snapshot of that particular alert.

And that's it, right? There is no video that gets stored. We've made a very conscious choice of not storing a single video as part of it, because of exactly what you said earlier, right? The concern of: well, are we looking at all the videos, are we actually monitoring everything, and so on and so [00:31:00] forth.
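
The alert-plus-snapshot flow Sridhar describes can be sketched roughly like this. Everything here is a hypothetical stand-in (the detector, event labels, and frame format are invented for illustration, not SparkCognition's actual API); the point is that only a label, a timestamp, and one snapshot ever get kept.

```python
import time
from dataclasses import dataclass

@dataclass
class Alert:
    """Only this metadata survives an event: the label, a timestamp,
    and a single snapshot frame. No video clip is ever written."""
    event_type: str
    timestamp: float
    snapshot: bytes

def detect(frame):
    """Hypothetical stand-in for the edge vision model: returns an
    event label ('smoke', 'weapon', ...) or None for a normal frame."""
    return "smoke" if b"SMOKE" in frame else None

def process_stream(frames):
    alerts = []
    for frame in frames:
        event = detect(frame)
        if event is not None:
            # Persist only the triggering snapshot plus metadata.
            alerts.append(Alert(event, time.time(), frame))
        # Non-event frames are simply dropped, never stored.
    return alerts

stream = [b"normal traffic", b"SMOKE near pump 3", b"normal traffic"]
alerts = process_stream(stream)
print([a.event_type for a in alerts])  # ['smoke']
```

The design choice is in the loop body: there is no branch that accumulates raw footage, so privacy follows from what the system never retains rather than from an access policy alone.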

So that's number one. Number two, the other thing that we've done: even in the evidence that we store, there's a choice. You can say, I don't want any evidence, I just want to know that something has happened, and alert me about that, right? The reality is that, again, as I said earlier, nobody likes a black-box AI.

People want to know. I think people always want an explanation; otherwise it becomes a bit of a he-said, she-said back and forth. So you can choose not to keep evidence of the snapshot, or even within the snapshot evidence, we have technology that blurs out and anonymizes sensitive things: faces, name tags, brand names, vehicle license plates, etc.

Any of these kinds of identifying factors that might be within the scene that the camera is looking at can be anonymized in a fairly straightforward way, through a simple switch that we can set up. [00:32:00]
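
As a rough illustration of that anonymization step, here is a toy mean-blur applied to one region of a grayscale image, standing in for a face or license plate. This is an assumption-laden sketch: a real system would first run a detector to locate the sensitive regions, and the image here is simplified to plain nested lists of pixel values.

```python
def blur_region(img, box, k=1):
    """Mean-blur the pixels inside box=(top, left, bottom, right) on a
    2D grayscale image (list of lists of 0-255 ints). Pixels outside
    the box are left untouched."""
    h, w = len(img), len(img[0])
    top, left, bottom, right = box
    out = [row[:] for row in img]  # copy; the source frame is not modified
    for y in range(top, bottom):
        for x in range(left, right):
            # Average over a (2k+1) x (2k+1) neighbourhood, clamped to the image.
            ys = range(max(0, y - k), min(h, y + k + 1))
            xs = range(max(0, x - k), min(w, x + k + 1))
            vals = [img[yy][xx] for yy in ys for xx in xs]
            out[y][x] = sum(vals) // len(vals)
    return out

# A bright 2x2 "identifying feature" in the centre of a dark 4x4 frame.
img = [[0, 0, 0, 0],
       [0, 255, 255, 0],
       [0, 255, 255, 0],
       [0, 0, 0, 0]]
anon = blur_region(img, (1, 1, 3, 3))
print(anon[1][1])  # 113 -- the sharp 255 values are smeared with the dark surround
```

The "simple switch" Sridhar mentions would amount to toggling whether this pass runs on each snapshot before it is stored or shown.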

And there are companies, as an example, when we deploy these in certain environments where there might be unions, that may actually want that. They may want to know that there is a safety incident, some person getting too close to a machine that they shouldn't be getting too close to, or walking into a zone where they shouldn't be.

They just don't want to know who it is. They just want to use that as a way to educate, not as a way to penalize the person. So I think it's all possible, and the technology makes it possible. You know, of course, there are also ways to completely anonymize alerts and only look at things as more aggregated instances, or to eliminate individual alerts entirely and look only at the aggregates.

There are levers and switches that you can turn on and off. We have encountered and addressed a number of these concerns in a variety of different countries; I mentioned we've deployed these solutions in countries in North America, in Europe, and in Asia. There are nuances [00:33:00] there as well, right?

Bill Pfeiffer: Right. I guess it takes a whole lot of flexibility. To your point, there are just cameras every place, and yet we're very focused on privacy and we don't want to be caught on camera. Dude, look around. Come on, there are cameras every place.

So yeah, it's about finding the balance there. The things that you can do with this are amazing, and we don't want to give that up, right? I want to be safe and secure, but not on camera. Hmm. Okay. So, finding the happy balance of that trade-off is a really slippery slope, and I'm glad to hear you've put so much thought into it. So that's privacy. But then every time we talk about AI, we have to bring up ethics, right?

You want to make sure that you're sourcing your data ethically and training your AI ethically, and making sure that you're not introducing bias. But every data set has some bias, and ethics are subjective and subject to [00:34:00] change. So how do you make sure that you're training your AIs ethically, what does ethically even mean, and how do you know when you've achieved it?

Because it's a moving target.

Sridhar Sudarsan: Yeah, it's a great question, Bill. I would say that, you know, we could probably spend hours just going through that. Obviously it's a very open topic and there are a lot of viewpoints on it. But I think there are a couple of things that make our focus a little bit narrower.

For example, what we don't do is have a B2C-type app where, you know, we're just putting something out there for anybody to start using, right? Most of our solutions, as I described earlier, are for businesses or enterprises or, you know, entities, so to speak: B2B-type scenarios. When we train for the types of scenarios within that space, we use a variety of different techniques to ensure that we have the [00:35:00] right mix of quality, accuracy, and recall, and the right metrics around false positives and false negatives that we need to

serve up for a certain situation or scenario, including generative AI techniques in areas where you cannot actually get basic training images from anywhere. So maybe I'll elaborate a little bit on how we do the training, and then I'll talk about the ethics part of it, if that makes sense.

So for most of these use cases that I mentioned, which fit into security, safety, productivity, efficiency, or situational awareness type scenarios, these are already trained and continue to evolve as we go further, right? So these are more or less what you could call out of the box.

Some might require certain tuning depending on exactly what location it's in, and so on, but in many cases they work, you know, as is, which goes back to that time to value being very, very quick. [00:36:00] Now, as we continue to encounter new scenarios and train for them, we use a variety of techniques. So, for example, if somebody wants to train on smoke or fire in a certain location, it's not always easy to go create a lot of training data and just light fires and create smoke.

Bill Pfeiffer: Yeah, start setting fires to show them what...

Sridhar Sudarsan: Especially in hazardous areas, right? If you think about chemical refineries or gas stations, for example, obviously you don't even want to have somebody smoking in that area. So what we do is use generative AI techniques to create these kinds of images, which allow the system to learn and understand what these realistic situations could or would look like, right?

So that's one approach that we take. There are some other areas where a very specific type of detection is required, and so we train in a variety of different types of environments.
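
One simple flavor of that synthetic-data idea, compositing a generated effect onto a real background frame, can be illustrated with a toy alpha blend. This is not how SparkCognition's generative pipeline works, just a sketch of the compositing concept; the pixel values and alpha weights are invented for the example.

```python
def composite(background, overlay, alpha_mask):
    """Blend a synthetic overlay (e.g. generated smoke) onto a real
    background frame, pixel by pixel, to create a training image.
    Inputs are flat lists of grayscale values; alpha is in [0, 1]."""
    return [round(b * (1 - a) + o * a)
            for b, o, a in zip(background, overlay, alpha_mask)]

# Hypothetical 4-pixel strip: bright smoke (200) fading in over a dark
# scene (50) from left to right.
background = [50, 50, 50, 50]
smoke      = [200, 200, 200, 200]
alpha      = [1.0, 0.5, 0.25, 0.0]
sample = composite(background, smoke, alpha)
print(sample)  # [200, 125, 88, 50]
```

By varying the overlay, alpha mask, and placement across many real backgrounds, you can manufacture labeled positive examples of events that would be unsafe to stage, which is the core of the approach Sridhar describes.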

So for example, when you talk about [00:37:00] weapons and firearms detection, it's a very, very tricky problem to solve, right? Because in most cases, it's not like bad actors are walking in with their weapons or firearms fully on display for the camera to see and recognize. There are times when it's hidden or concealed, or the person's turning away from the camera and the camera doesn't have a good view of it.

So being able to detect that, while identifying and removing the false positives, matters, because you don't want to generate too many of these alerts and have people lose trust in the system. So there's a balance that we continue to work on. From an ethical perspective, as I mentioned, especially if there are very sensitive environments or sensitive videos from a customer,

those are used very specifically for them, right? It's not something that we make available to others [00:38:00] and so on. And when we do, we obviously ask permission, but not for sensitive stuff; by definition, people are not willing to share that. So we take a lot of those kinds of things into account.

On facial detection and vehicle recognition and these kinds of things, we don't have the challenge that some of the more B2C-type applications have, where they have to train on certain kinds of, you know, people of different age, gender, race, color, and so on and so forth.

And those are models that we use and then build on and enhance. This technology has evolved significantly over the years, to the point that I think we get very good results, as we demonstrate. Another thing from an ethical standpoint that tends to come up is who actually is exposed to the alerts, right?

Let's say there is an anomaly that is detected or an alert that is [00:39:00] generated. You don't want to send that alert to people who are not authorized to receive it, even if they are within the enterprise. So being able to build the appropriate guardrails, so that only the people with the right authority levels

are getting those notifications and alerts, and can see who or what happened at a particular time, is another interesting thing. And this is not at training time; this is at runtime. So when we talk about ethical situations, you've got to look at it across the board. Training is only one aspect of it; there's also the entire operationalization of how it all

plays out. So the system has been built with a number of different authority levels, or authorization levels, as you would say, and being able to configure these alerts so that only the appropriate people, the people that are authorized to see them, receive them is another example of what the system already has.
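
A minimal sketch of that runtime guardrail, routing each alert only to the roles authorized for its type, might look like the following. The role names, alert types, and permission table are invented for illustration; they are not SparkCognition's actual configuration.

```python
# Hypothetical authorization levels: each role may see only certain alert types.
PERMISSIONS = {
    "security_lead":  {"weapon", "intrusion", "fire"},
    "safety_officer": {"fire", "smoke", "proximity"},
    "store_manager":  {"smoke"},
}

def recipients(alert_type, subscribers):
    """Return only the subscribers whose role is authorized to
    receive this alert type; everyone else is filtered out."""
    return [name for name, role in subscribers
            if alert_type in PERMISSIONS.get(role, set())]

subs = [("ana", "security_lead"),
        ("raj", "safety_officer"),
        ("kim", "store_manager")]
print(recipients("fire", subs))    # ['ana', 'raj']
print(recipients("weapon", subs))  # ['ana']
```

The key property is that the filter runs on every notification at dispatch time, so a weapon alert never reaches someone whose role only covers, say, smoke events.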

So as I said, I think we continue to find and [00:40:00] discover new things, but along the way we have put in a number of different guardrails that prevent any of these ethical considerations from becoming questionable, at least based on what we are doing.

Bill Pfeiffer: Okay. So kind of jumping tracks. I have to ask, what's your favorite deployment so far?

Is there something that you were just super proud of, or you walked away thinking, I can't believe we could actually do that. We made it work.

Sridhar Sudarsan: Well, it's a tough choice, right? There are so many of these deployments, and some of them come with degrees of complexity that you don't really anticipate.

There was a particular deployment that we were doing here on the West Coast about a year, year and a half ago, inside a renewable energy storage facility: handling some very complex environments where you're trying to detect fire, electrical arcs, smoke coming from a very tiny little crevice, and [00:41:00] being able to detect that in a very challenging, very complex environment. Versus another environment where,

you know, as an example, in India, we have a deployment that goes across about 12,500 locations. It's a single customer, it's retail gas stations, and it's pan-India. India is a large country.

Bill Pfeiffer: That's some serious scale, yeah.

Sridhar Sudarsan: Yeah, it's an incredibly humbling experience to go through that kind of deployment, but it's also a very...

challenging environment, and being able to see that come through and get deployed. I'm just proud to be surrounded by a fantastic team that thinks about all kinds of different scenarios and environments. So there's the science part, the exciting areas that I enjoy, where I look forward to seeing, wow, we can actually do that.

And it's amazing that we see that; versus the scale part of what we do, which is a second factor; and then the engineering part of what we do, which is a third factor. [00:42:00] And so if I put these three together, there's a number of different examples that come up, right?

There's another example of a retail store where we caught a burglar in the act, right? And this was a jewelry store. We had two or three triggers all within a few minutes, the appropriate authorities were notified, and they actually arrived at the scene and caught the burglar in the act.

Bill Pfeiffer: Nice, a good ROI right there.

Sridhar Sudarsan: Indeed, right? So being able to see things like that every day, I think, is fascinating. There's another gas station where we saw smoke coming out; it was actually a terminal, and the smoke was coming from behind, about 100 meters away. And there were about 50 tanker trucks, all obviously carrying fuel, within that hundred meters.

And it turns out that on the other side of the wall, there was somebody cooking on a campfire. So seeing real examples like these come through [00:43:00] from customers every time is fascinating, and to me it feels very real. Another customer that we had deployed this for used to have a third party do audits for them, paying several hundred thousand dollars multiple times a year.

And based on the fact that they're now getting 24/7 reporting of what they need, they decided that they don't even need to do the audits anymore. Again, another example of something we weren't even thinking about when they did the ROI calculation, but they canceled the entire audit contract.

So I think these are excellent examples of what we see and hear from our customers. To me, the icing on the cake is when we hear about things like zero safety incidents in an entire year of them running with our solution, compliance going up by about 40%, or footfall into retail locations growing by 3% from before they deployed the [00:44:00] solution.

When customers share this information with us and tell us the value that they're getting, every one of those brings a smile to my face, and I feel very, very good about it.

Bill Pfeiffer: Even just being able to measure some of that stuff is pretty amazing. To be able to say 3 percent more footfall than in previous quarters, in previous years.

You can actually watch how many people walk in and walk out and don't touch anything, or walk in and actually engage with the store, walk in and purchase from the store, as opposed to just monitoring sales or going by "it feels busier, it feels less busy." You can actually start to quantify that and start to do some real experimentation.

That's fantastic. The state of the art has really come so far, and I love how much of that you've seen, been a part of, and actively driven.

Sridhar Sudarsan: Yeah, yeah. We've come a long way, but there's a long way to go ahead of us as well.

Bill Pfeiffer: And it's speeding up, it's moving faster, and it's going to really cool [00:45:00] places.

I can't wait to see. Sridhar, thank you so much. This has been a fantastic conversation. It's given me some interesting things to think about. How can people find you online and keep up with the latest stuff that you're doing as you're doing it?

Sridhar Sudarsan: Well, first of all, Bill, thank you very much for having me here on the podcast with you.

It's been really fun sharing a lot of the work that we've been doing, and where we're going as well. In terms of where you can find more information about us: if you go to sparkcognition.com, that will show you a lot of the various work areas that I've mentioned today, and also a lot of other things that I've not mentioned today.

I would welcome you all to go look at sparkcognition.com.

Bill Pfeiffer: Love it. Thank you so much. Thank you for the time and for the perspective.

Sridhar Sudarsan: Thank you, Bill.

Bill Pfeiffer: That does it for this episode of Over the Edge. If you're enjoying the show, please leave a rating and a review and tell a friend. Over the Edge is made possible through the generous sponsorship of our partners at Dell Technologies.

Simplify [00:46:00] your edge so you can generate more value. Learn more by visiting dell.com/edge.