Over The Edge

AI's Role in Sustainable Building Management with Jean-Simon Venne, CTO and Co-Founder at BrainBox AI

Episode Summary

AI is revolutionizing building energy management. In this episode, Bill sits down with Jean-Simon Venne, Co-Founder and CTO at BrainBox AI, to discuss their cutting-edge AI solutions for energy efficiency. They dive into current AI challenges, the critical need for defining AI's purpose, and the impact of predictive and preemptive control. Additionally, they discuss how to balance AI power consumption with efficiency gains.

Episode Notes

AI is revolutionizing building energy management. In this episode, Bill sits down with Jean-Simon Venne, Co-Founder and CTO at BrainBox AI, to discuss their cutting-edge AI solutions for energy efficiency. They dive into current AI challenges, the critical need for defining AI's purpose, and the impact of predictive and preemptive control. Additionally, they discuss how to balance AI power consumption with efficiency gains.

Key Quotes:

“The bottleneck is now on your capacity to find the right mix of technology to assemble a new solution.”

“You could very rapidly deploy AI in thousands and thousands of buildings, without any bottleneck, and get the first layer of 20, 25 percent energy reduction.”

“I think where the future is going in terms of optimizing is not only the building at the building level, but optimizing the behavior of the building, so the grid could be optimized.”

--------

Timestamps: 
(01:48) Jean-Simon's career journey

(04:22) Current bottlenecks in AI

(07:07) Bias in AI models

(13:22) Understanding the complexities of building operations

(20:44) Factors influencing AI predictions

(25:20) Energy consumption in buildings

(28:12) Clustering buildings for grid optimization

(35:00) Developing specialized LLMs for building management

--------

Sponsor:

Over the Edge is brought to you by Dell Technologies to unlock the potential of your infrastructure with edge solutions. From hardware and software to data and operations, across your entire multi-cloud environment, we’re here to help you simplify your edge so you can generate more value. Learn more by visiting dell.com/edge, or click on the link in the show notes.

--------

Credits:

Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.

--------

Links:

Follow Bill on LinkedIn

Follow Jean-Simon on LinkedIn

Edge Solutions | Dell Technologies

Episode Transcription

Producer: [00:00:00] Hello and welcome to Over the Edge. This episode features an interview between Bill Pfeifer and Jean-Simon Venne, the CTO and co-founder at BrainBox AI, a company that leverages AI to make buildings smarter and greener, reducing energy consumption and cost. Jean-Simon has over 25 years of experience in technology.

He's led the development and implementation of technologies in telecommunications, biotechnology, and energy efficiency. Before founding BrainBox AI, he played a pivotal role in integrating M2M technology into more than 200 smart buildings across North America, Europe, and the Middle East. In this conversation, he and Bill dive into the revolutionary power of AI in energy efficiency and BrainBox AI's innovative solutions for energy management in buildings.

But before we get into it, here's a brief word from our sponsor.

Ad read: Over the Edge is brought to you by Dell Technologies to unlock the potential of your infrastructure with edge solutions. From hardware and [00:01:00] software to data and operations, across your entire multi-cloud environment, we're here to help you simplify your edge so that you can generate more value.

Learn more by visiting dell.com/edge, or click on the link in the show notes.

Producer: And now, please enjoy this interview between Bill Pfeifer and Jean-Simon Venne, the CTO and co-founder at BrainBox AI.

Bill: Jean-Simon, thanks so much for joining us. I know you've come here from BrainBox AI, and we'll talk a lot about your work in the trenches of building AI, but starting out, it's always fun to hear a little bit of history. Can you tell us how you got started in technology? What brought you here?

Jean-Simon: Yeah, absolutely, Bill. Uh, I think technology was always part of my life, from university all the way to today. I always try to use technology to resolve different problems, and got a lot of frustration along the way, right? [00:02:00] So back in the eighties, it was always like, oh, we could do technology to do this and to do that.

But then you realize that the technology is not ready, or if it does exist, it's so expensive to use that it's really not viable. So I would say this kind of playground, sandbox, of playing with the technology got very, very exciting by 2015, because suddenly the capability of the technology is reaching a certain level.

At the same time, the cost of using that technology is becoming so low. When you think of storage, right, like data storage, I think 20 years ago, what was the cost to store one terabyte of data? Uh, there would not be any CFO that will let you basically play with that capacity. The concept of a terabyte of data back then was pretty mind blowing.

Yeah, and today, really, you buy that as a little stick at home, you know, at Office Depot, and how much is it, right? So it's really like a non-[00:03:00]event, right? Yeah, you can have more than that on your phone. Exactly. So really now we're in this kind of world where the capability and the cost are so overwhelming, in the sense that they're not a limitation anymore, that really the bottleneck is now on your capacity

To find the right mix of technology to assemble a new solution. And that's very exciting, because it's the first time, I think, that we're really reaching that level, if you look back at the last 50 years. Where we're now at a bottleneck is really your own capability to brainstorm and find the right assembly of technology to resolve a problem.

Bill: It's interesting that you mention the bottlenecks and it's funny to think about them, right? Compute was a bottleneck. Storage was a bottleneck. Not having enough data was a bottleneck. Not having the right algorithms to process the data was a bottleneck. And they all sort of released at the same time.

Great coincidence, but I'm sure it's not a coincidence. I'm sure, you know, somehow they were moving in lockstep, and something was pushing us forward. [00:04:00] So, looking forward, what's the current bottleneck, and how do we get past that?

Jean-Simon: I think that the current bottleneck is now figuring out how we want to apply the technology.

I think we're in this world now where it's so powerful, the technology, and there's so many things we could do with it that we have to ask ourselves the question, I think, What do we want to do with it? What's the purpose? Because there's a lot of people that are just pushing the frontier for pushing the frontier, right?

And the question I very often ask, let's say when my team comes, Oh, we should play with this, and basically I was like, Okay, so what is the problem you're trying to resolve? Or you're just very excited to play with this because it's kind of new, right? Can versus should. I can do this. But should you do that?

Is there any reason? So, uh, I think the key question these days is: how do you put the proper [00:05:00] safety and guidelines in place, so you're going to use it in a productive way, in the sense that it's really going to help, or it will not help? So I think that before, there was really nobody asking these questions, because everybody was focused on trying to push the frontier of technology.

Now we're really getting into that: okay, we really have to have this discussion. And unfortunately, very few people are having these discussions. How much does it cost to compute? Just think of the journey of AI, right? It's a question that we need to address.

Bill: I've been expecting that, probably over the next year or so, people are going to start to really realize how much it costs to train a model, to run a model, and that's going to start a very different conversation. I think sustainability is going to come back again, right? Last year it was a big thing, because Europe had a natural gas spike in pricing and a drop in availability. That caused all sorts of problems.

They solved the natural gas thing, and sustainability kind of went quiet. I think it's going to come back when people realize, like, wait, AI could [00:06:00] actually light the earth on fire if we keep doing this. And I like what you were saying about the bottlenecks again, right? The bottleneck of: we need to figure out why we want to do AI, what we're going to do with it.

And that's something that AI can't really help us with necessarily because it's good at summarizing things that happened in the past and guessing some of what's going to come in the future based on trend lines. But if there's not a trend line, AI is not going to be super like out of the box, black swan creative.

We have to do that ourselves. Two very different things, and they'll probably end up moving forward right at about the same time. Like, we'll see them unleash in a couple different gates of innovation. That's going to be fun to see.

Jean-Simon: Yeah, it's interesting, because people sometimes, you know, they ask us, how do they train these models as well? They train it with the existing data, or existing behavior data. So how do you want to train these models? Do you want to train them with all of our bias or not? And we've seen this in the past, right? You look at the last [00:07:00] six, seven years, these models are trained, let's say, to do some prediction. I'm not talking about gen AI here, but I'm really going back to deep learning.

You're training a neural net to give you a prediction or give you an insight based on a training set. The model will, will learn. But if, if your training set is full of bias, the prediction you're going to get is going to be full of bias because that model is just basically telling you what will happen based on the past, as you mentioned.

So we have to be very careful here, because how do you strip the bias from the information, to give it like a pure data set without any bias? So you could get basically a prediction which is unbiased. But then again, there are so many biases, and some of them are linked to culture or religion, so maybe some people don't want the prediction without bias. They want the prediction with bias. And then you get into that discussion, like, whoa, whoa, whoa, how are we going to balance this? And then you go directly to the [00:08:00] human behavior, and what does the human want? And there are different versions of that question, in terms of what is the right answer.

They want to have the prediction with bias. And then you get into that discussion, like, whoa, whoa, whoa, how are we going to balance this? And then you, you, you go directly about the [00:08:00] human behavior and, and what does the human want? And there's different version to that question in terms of what is the right answer.

So imagine the poor AI: like, I need to balance a model based on all these conflicting versions of the answer, and there's no real good answer. There are several answers.

Bill: And the ways that we can introduce different biases, cultural and ethnic and all sorts of, like, you know, age-based, and just technical.

I was talking to a company years ago that was doing AI working to read radiological scans. And they noticed a really weird quirk and they traced it back and they traced it back. Every now and then it would just say, there's cancer in this and there would be nothing. And they finally figured out when they trained it, it learned that if there were pen marks on the scan, there was probably cancer because the doctors would mark it up to say, here it is.

And so the computer just said, well, if there's a pen mark, that's an indicator of cancer. Yeah. Yeah. Yeah. Yeah. That's [00:09:00] not quite what we're going for here. No, exactly. You know, humans just never thought of that. We automatically filtered that out as just background information. And so, you know, then you think about all the aspects of people and culture and economics that we just don't necessarily think about.

Why don't you hire more people that are different than you? Because we've only hired people like me in the past. That's how we trained the AI. I don't know.

Jean-Simon: And it's funny, because when we created BrainBox, we were looking at how they put together an autonomous car, right? And the discussion we had back then with a couple of people working in that industry, they were telling us their problem is: what is the perfect way of driving? Because we want to teach the different algorithms to drive properly.

So what is driving properly? The problem is, when you look on the road, there are several answers to that question. And it becomes very complex, because if there was only one way, the perfect way of driving all the time, [00:10:00] it would be much easier. But this is not true. There is an adaptation, you know: you need to adapt to present conditions, to the people surrounding you. And then it becomes very complex. So how far do you adapt, you know? It's basically what we call in-context learning. It's a fascinating topic, but it kind of shows how complicated it is to teach the AI to do something like the human.

Bill: Mm-hmm. And another, another one of those instances of what you were saying, you know: is there a right way to drive? Mining trucks are fantastic examples of where you can automate, right? Because they have a set path and there aren't people there and things like that. So the likelihood of an accident, very small.

And they actually had to inject jitter into the way that they drive, because they found that they were creating these massive ruts down the perfect center of the road. And so nothing else could drive that road, and only the AI could drive it, because you had to follow the ruts perfectly. Yeah. What's causing these ruts? Well, they go [00:11:00] just there.

Can you imagine if AI started driving on the roads and it was all perfectly down the side, there would be ruts down the pavement.

Jean-Simon: Imagine all the semi-trailers, like, all driving with one-inch precision at exactly the same place every day, day and night. Uh, they will destroy the road.

Bill: Such a fascinating, it's a weird set of problems and weird things that we never think of.

So you work primarily, or at least partially, I think primarily on the operational systems in buildings, which is, there's so much going on there because it's a complex physical system, but then there are people. And we do weird stuff and I just, I can't even imagine the complexity of trying to automate that in a way that is quote unquote, right.

And you know, trying to save power and such. So what sort of problems are you typically trying to solve and how do you do it?

Jean-Simon: Yeah. So what we've done is seize the opportunity that pretty much all commercial and institutional buildings already have a [00:12:00] control system, meaning they're digitized. So think of all of the air handling units.

So all of the heating, cooling, the fans, the pumps, all that system is already controlled by a digital layer, which is generating a ton of data. These controllers, where the data resides, are most of the time just in the building, and the data stays there. So we connect to these controllers, we extract the data, and we train, in the cloud, these neural networks with that data that we're extracting, so they could give us a prediction. So basically, the first step is we train these models, we get a prediction, which is telling us, for each of the rooms in that building, how the temperature and the humidity will fluctuate through time. We could push it further, but we're really focusing on the next six hours, and how the existing system, which is conditioning the temperature in your room, will react to maintain your desired setpoint, whatever it is.

And having that prediction, we kind of see the [00:13:00] future. So then what we do is we use that prediction as kind of the radar, and we run all the possible combinations of controls. With that, we're basically trying to figure out, for all of the possible control sequences that we could do for that room, what the impacts are on multiple objectives.

So back to your question: basically, how much energy will we consume to maintain that 71 degrees that that person wanted in that room? Are we going to create any spike for the utility on the electricity side? In several places in the US and in Canada, you get punished at the end of the month when you create a spike, because the utilities, they don't like them, and they will charge you for it.

So you want to lower the spike if possible, which means don't run all the equipment at exactly the same time, because you're going to create that spike, right? So, can we load-balance that, so the utility will not punish us at the end of the month? Lower the emissions, improve the comfort: if there are these periods where it was a bit [00:14:00] too hot, a bit too cold, can we cancel them out?

And then the cycling of equipment. Electrical motors, like fans and pumps, really wear and tear based on how many times you stop and start them. What we see in most of the equipment, the way it's being operated, is a lot of stop-and-start. The more you do that stop-start, the more you're going to reduce the lifespan of that pump or that ventilator.

So you're going to need to replace it, let's say, after eight years instead of ten years. So stop-start is harder than just a continuous run. Exactly. It's an electrical motor, right? When you start the motor, there's electricity that goes in, and it basically does a kick, and there's X thousands of kicks the equipment can go through, after which you need to replace it.

So the more often you do these kicks, the shorter the lifespan of that equipment will be. [00:15:00] So, you know, you want to avoid that, and you want to prolong the lifespan as much as possible. So with these multiple objectives, we're basically doing the optimization. We're saying, for all of these objectives, what is the control sequence that I could apply right now that will give me the best result for all these objectives at the same time?

So it's a give and take, right? You cannot optimize everything at 100%. So you kind of give and take, and you try to get your best equation based on all these objectives at the same time. And then we decide which one is the best control strategy, and we apply it, meaning that the algorithm will write back to your equipment and operate it according to that new strategy.

Then we go five minutes further in time, and we redo the entire exercise that I just described, to figure out if we should make another adjustment. What you get out of that kind of process is really that you're flipping from reactive control to preemptive control. Most equipment today, the way it operates, is reactive, meaning the temperature is starting to be a bit too hot in your room.

The [00:16:00] system will detect, oh, it's becoming too hot, so I'm going to start the cooling to bring it back down. That's a reaction. What we do is we try to foresee events where it's going to be a bit too hot in your room: we see that in 62 minutes it's going to be too hot, and we then make a strategy to make sure that that event will never happen.

So it's a bit like the movie Back to the Future. We go in the future, we see how horrible it is, then we come back to the present, and we come up with a couple of things. The kick of the multi-objective is that on the emissions saving, we go up to 40%.
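The loop described here, forecast each room a few hours ahead, score every candidate control sequence against several objectives (comfort, energy cost, equipment cycling), apply the best one, then redo the whole exercise a few minutes later, can be sketched roughly as follows. The toy thermal model, the weights, and every function name are hypothetical illustrations, not BrainBox AI's actual implementation:

```python
import itertools

def predict_temps(temp, outdoor, actions, drift=0.3, cooling=1.5):
    """Toy thermal model: roll the room temperature forward one step
    per action (1 = cooling on, 0 = off). Purely illustrative dynamics."""
    path = []
    for cool_on in actions:
        temp += drift * (outdoor - temp) / 10 - (cooling if cool_on else 0.0)
        path.append(temp)
    return path

def score(actions, path, setpoint, price_per_step):
    """Lower is better: a weighted mix of comfort error, energy cost,
    and equipment cycling (stop/start transitions shorten motor life)."""
    comfort = sum(abs(t - setpoint) for t in path)
    energy = sum(p for a, p in zip(actions, price_per_step) if a)
    cycling = sum(1 for a, b in zip(actions, actions[1:]) if a != b)
    return 1.0 * comfort + 0.5 * energy + 0.8 * cycling

def plan(temp, outdoor, setpoint, price_per_step, horizon=6):
    """Enumerate every on/off cooling sequence over the horizon and
    keep the one with the best multi-objective score."""
    best = None
    for actions in itertools.product([0, 1], repeat=horizon):
        path = predict_temps(temp, outdoor, actions)
        s = score(actions, path, setpoint, price_per_step)
        if best is None or s < best[0]:
            best = (s, actions)
    return best[1]

# Re-planned every control interval (e.g. five minutes): only the first
# action is applied, then the whole exercise is redone with fresh data.
actions = plan(temp=73.0, outdoor=90.0, setpoint=71.0,
               price_per_step=[1, 1, 3, 3, 1, 1])
apply_now = actions[0]
```

A real system would replace the toy model with trained neural networks and search far larger action spaces, but the shape of the loop (predict, score, pick, apply, re-plan) is the same.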

So percentage-wise, we're saving more on emissions than on dollars for the customer. But that makes a lot of sense, because the AI is saying, like, wait a sec: yes, I need [00:17:00] to save dollars for that customer, but I also need to save emissions. And very often, with one rock, you're basically going to hit two targets.

I'll give you an example: time of day. There are periods during the day when the electrical energy coming into the building is very green. It's been produced by solar panels or windmills. There are other times during the day when that kilowatt getting into the building has been produced by a very dirty power generation source.

Think of a coal plant. So the AI, knowing that, monitoring that in real time, is saying, well, to save that dollar, I'm going to save it when it's coming from the coal plant, because I also want to optimize my emissions. So you start to see how clever the AI is. And that's why AI is very good at video games, right?

Because it figured out very quickly what's the best strategy to win, because it's looking at all the possible combinations, which we're not doing as humans. And that's why we lose against the computer in video games: because we're looking at some of the alternatives, but not [00:18:00] all of them. And the AI is looking at all of them.

So it will probably find one that we didn't think of, and we're going to lose. So it's very, very good at computing a lot of information in real time, which us humans are not that good at.

Bill: And you had mentioned that you're going into places where everything's already digitized, which I presume means basically it's instrumented.

It's all fed into a single control room, and there's a person sitting there, and they said, wow, you can control it all from this one place, it's so modern. And now, put a computer in there and watch. So you're watching, you were talking about, power feeds and power prices. What other factors do you pull into there?

Are you watching, like, for temperature spikes, and when cloud cover breaks, or student schedules, when the room's going to get full, or actual occupancy? I mean, what sorts of things do you look for, and how far ahead can you look?

Jean-Simon: So we try to get as much information as possible, because we don't know, you know. Each building is kind of a [00:19:00] snowflake, right?

So you don't know what the logical trigger is that's going to create a bit too cool, a bit too hot, in each part of the building, or in each building. So we extract an incredible amount of information, not only from the building itself in terms of all of the trend data, but think of the bills, you mentioned that.

So we get the bill information, because what's happening is, like, think of natural gas: the price varies through time, right? So when you receive your natural gas bill at the end of the month, the fluctuation of the price keeps going on during the month. Electricity is usually more stable, but still, on the electricity side, you have these rates, which are time-of-day, like in Toronto.

Your kilowatt is a different rate in the morning than the afternoon. Depending on what time you're consuming that kilowatt, you're not paying the same price. So we're extracting all of these bill and rate data. We're extracting also, and I was mentioning that like, how the electron that you're consuming right now has been produced.

So there's this NGO in California called [00:20:00] WattTime, and we did a partnership with them. They give us, in real time, what type of power generation system produced the electrons coming into that specific building. So it's very important to put a color on it, for the AI to figure out that all electrons coming into that building during the day are not equal. They vary in terms of being green or dirty. So you want to know that in real time, because that's part of your equation to figure out, when should I go maximize my emissions savings, right? At what time during the day? You were mentioning the weather. So we have to literally go get the same type of weather data that the airline pilots are using, because yes, we want temperature and humidity.

We want wind speed, wind gusts, wind direction. But the thickness of the cloud, we discovered very quickly, is a big driver. Because we already know where the sun is on any given day, it's a fixed orbit, right? So we derive the position of the sun at any given time, [00:21:00] depending on where the building is.

What is changing the impact of that sun radiation on the building is really the thickness of the cloud. And that will make a huge difference on the cooling and ventilation and the heating, depending on the season, depending on that cloud thickness, which will vary during the day. So you need to know that.

You need to know that in the morning it's super cloudy, radiation, zero impact, but then this afternoon it's going to clear up, and the sun's going to hit that building, and then you're going to have a hell of a cooling challenge. Probably at a time which is the worst in terms of the dirtiness of the electrons coming into that building.

So what about doing pre-cooling? Would that be the best strategy? I don't know; that will need to be calculated, right? So even though buildings are very fixed assets, when you compare them to airplanes or cars, the way to optimize them is a very dynamic equation. That's always surprising, right? So on the user side, we try to stay very shy, because yes, it would be [00:22:00] great to have the exact position of everybody, and what each person likes and doesn't like.

But then you're getting into the privacy issue. So we basically decided, we're going to stick to the setpoint. The people at the thermostat on the wall decided that they want 73, so we're going to chase that. And we believe that if people want it colder or hotter, they will basically play with that thermostat and tell us what the new target is that we need to follow. But knowing where people are and what they're doing, we kind of don't want to go there.

This is not our business, and we should not know about it. So we try to stay very shy of that, even though it is a big factor in the heating and the cooling of a building, depending on how many people there are. Think of a conference room, right? There are, let's say, 30 people stuck in a very small conference room; it's going to get very hot very quickly.

So, but then again, if these people, they ask for 70 [00:23:00] as a set point, we'll try to maintain that 70.
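The "not all electrons are equal" idea, using a real-time grid carbon-intensity signal (such as the one WattTime publishes) alongside time-of-day rates to decide when a flexible load like pre-cooling should run, can be illustrated with a small sketch. All the numbers, the weighting, and the function name are made up for illustration:

```python
# Hypothetical hourly grid carbon intensity (gCO2 per kWh) and
# time-of-day electricity rates ($ per kWh) for one afternoon.
carbon = {12: 200, 13: 250, 14: 600, 15: 700, 16: 650, 17: 400}
rates  = {12: 0.08, 13: 0.08, 14: 0.15, 15: 0.15, 16: 0.15, 17: 0.10}

def best_precool_hour(hours, kwh_needed, carbon, rates,
                      emission_weight=0.5):
    """Pick the hour where running the pre-cooling load is cheapest
    when dollars and emissions are weighed together."""
    def cost(h):
        dollars = kwh_needed * rates[h]          # energy bill impact
        kg_co2 = kwh_needed * carbon[h] / 1000   # emissions impact
        return dollars + emission_weight * kg_co2
    return min(hours, key=cost)

# Shifting 50 kWh of pre-cooling to a green, cheap hour hits two
# targets with one rock: lower cost and lower emissions.
hour = best_precool_hour(range(12, 18), kwh_needed=50,
                         carbon=carbon, rates=rates)
```

With these illustrative numbers, both signals point to midday (solar-heavy, off-peak), so the load lands at hour 12; when cost and carbon disagree, the `emission_weight` decides the trade-off.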

Bill: I guess you have to have some sort of boundaries within which to work. That makes sense. Yeah. And you've said, or I came across it while looking at your site: buildings use what, 38 percent of power total?

You, yeah, you mean like on the planet, right? Yeah.

Jean-Simon: Yeah.

Bill: So the buildings... It's a large amount.

Jean-Simon: I was surprised. So it's funny, because everybody thinks about transportation, right? They say, like, oh, you know, we need electric cars; what about the airline industry, you know, a big, big emissions contributor. But we tend to forget about the fact that, you know, the buildings on our planet are one third of the total energy and emissions.

And we keep building more and more buildings. The densification of large cities is happening at a very fast rate. There was a statistic done by ASHRAE, which is the association of all of the building mechanical [00:24:00] engineers worldwide. They calculated that every month, we're adding to our planet the equivalent of Manhattan in terms of new buildings.

Wow. And when you start to, you know, think about this kind of entire Manhattan Island. Yeah. Every month we're adding that to the existing building stock. You go like, wow, not only are there already tons of buildings, but we keep adding a lot every month. And these buildings, for the next, you know, 30, 40 years, will be consuming energy every day.

To maintain the desired temperature. So it's a huge, huge block, and we need to address it if we really want to have an impact on the planet. So yes, absolutely, you should, you know, replace your windows, redo the insulation of the roof, and make it more energy efficient. But these projects, they're very heavy. They cost a lot of money.

They cost a lot of money. Um, and there's just not enough, uh, uh, people, worker, construction, people to do, uh, [00:25:00] all the building, even though you would, let's say you would have all of the money in the world to do it. You would not be able to, because you will be bottlenecked with the capacity of how many construction company could do it.

So what's interesting with AI is you could very rapidly deploy AI in thousands and thousands of buildings, without any bottleneck, and get the first layer of 20, 25 percent energy reduction, and sometimes up to 40 percent emission reduction, kind of having an immediate impact at a very high scale.

And then of course, after that, it will be, okay, when you have time, go change the windows to get more energy savings.

Bill: And so many power grids are so stressed right now that just shaving a little bit off of their requirements would be pretty huge. Yeah. That would make a huge, huge difference. That's awesome. Yeah.

That's, that's awesome. Yeah.

Jean-Simon: It's interesting, because you're bringing in, and I don't know if you were going to bring it in later on, but the opportunity to do clustering of buildings. And maybe we could talk about that later on. Let's do it, yeah, let's talk [00:26:00] about it now. Oh, okay. I don't know what that means.

So, the discussion we have with the grid, and it's funny because, you know, at BrainBox we started, let's deploy in one building, two buildings, three buildings, and now we're reaching, you know, 12,000 buildings. So we're starting to have a density in some areas where you could basically do clustering. And what does clustering mean?

Well, the discussion we have with several grid operators is: would you be able, if I give you, let's say, a county, to aggregate all of the buildings from your different customers within that county, and follow a behavior of consumption that I'm going to give you? Because if you have like 500 or a thousand buildings in that county, and they all follow the same behavior together, imagine a school of fish.

At the grid, it will really, really help us, because then you will basically modulate megawatts, not kilowatts, but megawatts. Because these 500 or [00:27:00] these 1,000 buildings, all different customers, right, put together as a kind of virtual cluster, would basically follow a pattern that the grid would like to see, which will give a break to the grid.

Because the problem with the grid is everybody's very often peaking at the same time. So then they need to basically kick in extra power generation plants, and most of the time, these extra power generating plants are dirty. It could be a coal plant, it could be an oil plant. So what's happening is, to get through the peak of the afternoon, they basically start generating from very dirty energy sources to accommodate the peak.

But if you could change the behavior of consumption of a large quantity of buildings in a geographic zone, they will probably not need to start that dirty power generation plant, because you're shaving the peak. And that's the type of discussion we're having now. We're saying, okay, so there are some areas where it's critical, and some other areas where you just don't care, [00:28:00] right?

So this kind of virtual clustering of buildings is definitely, I think, where the future is going in terms of optimizing: not only the building at the building level, but optimizing the behavior of the building so that the grid can be optimized. So you're optimizing at the building level and you're optimizing at the grid level.
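The clustering idea described here can be sketched in a few lines of code. This is a toy illustration with made-up numbers and invented function names, not BrainBox's actual method: sum many buildings' hourly loads into one cluster profile, then move a flexible slice of the peak hour's load into the quietest hour, the way coordinated pre-cooling or deferral would.

```python
# Illustrative sketch of "virtual clustering": aggregate many buildings'
# hourly loads and shift a flexible slice of the peak away from the grid
# peak. All numbers are hypothetical.

def aggregate_load(building_loads):
    """Sum per-building hourly load profiles (kW) into one cluster profile."""
    hours = len(building_loads[0])
    return [sum(b[h] for b in building_loads) for h in range(hours)]

def shave_peak(cluster_load, flexible_fraction=0.2):
    """Move a flexible slice of the peak hour's load into the lowest-load
    hour, mimicking coordinated pre-cooling/deferral across the cluster."""
    shaved = list(cluster_load)
    peak_hour = max(range(len(shaved)), key=lambda h: shaved[h])
    valley_hour = min(range(len(shaved)), key=lambda h: shaved[h])
    moved = shaved[peak_hour] * flexible_fraction
    shaved[peak_hour] -= moved
    shaved[valley_hour] += moved
    return shaved

if __name__ == "__main__":
    # Three toy buildings, six "hours" each (kW); all peak in hour 3,
    # like the 4 p.m. peak in the conversation.
    buildings = [
        [40, 50, 70, 120, 90, 60],
        [30, 45, 65, 110, 85, 55],
        [35, 40, 60, 130, 95, 50],
    ]
    cluster = aggregate_load(buildings)
    shaved = shave_peak(cluster, flexible_fraction=0.2)
    print("cluster peak:", max(cluster))  # 360 kW
    print("shaved peak:", max(shaved))    # 288 kW: same energy, lower peak
```

The total energy consumed is unchanged; only its timing moves, which is exactly why the grid can avoid starting a peaker plant.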

And with the emergence of all the batteries and the charging stations for electric vehicles, that ecosystem for the grid is becoming very, very complex. At what time do you heat the building, or cool down the building? At what time do you charge the electric vehicles? At what time should you charge that battery,

and at what time should you discharge it? Suddenly it's becoming a very complex equation that you need to optimize in real time. And AI is the perfect tool to give you that kind of assistance with how you optimize all of that in real time.
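One small piece of that "complex equation" can be sketched as a greedy battery dispatch against an hourly price signal. This is purely illustrative: a real system would co-optimize HVAC, EV charging, and storage together with state-of-charge constraints, and the function name and numbers here are invented.

```python
# Toy dispatch: charge a battery in the cheapest hours, discharge in the
# priciest. This greedy version ignores state-of-charge ordering and is
# only meant to show the shape of the decision.

def dispatch_battery(prices, capacity_kwh, power_kw):
    """Return a per-hour plan: +power_kw (charging), -power_kw
    (discharging), or 0 (idle), given hourly energy prices."""
    hours = len(prices)
    n_slots = int(capacity_kwh // power_kw)      # hours to fill/empty
    by_price = sorted(range(hours), key=lambda h: prices[h])
    charge_hours = set(by_price[:n_slots])       # cheapest hours
    discharge_hours = set(by_price[-n_slots:])   # most expensive hours
    plan = []
    for h in range(hours):
        if h in charge_hours:
            plan.append(power_kw)
        elif h in discharge_hours:
            plan.append(-power_kw)
        else:
            plan.append(0)
    return plan

if __name__ == "__main__":
    prices = [0.08, 0.07, 0.09, 0.25, 0.30, 0.12]  # $/kWh, peak in hours 3-4
    plan = dispatch_battery(prices, capacity_kwh=100, power_kw=50)
    print(plan)  # [50, 50, 0, -50, -50, 0]
```

Even this simplified version shows why real-time optimization matters: the right answer changes every time the price or load forecast changes.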

Bill: Really getting toward an actual, legitimate smart grid, a real one where you're changing decisions based on what the [00:29:00] grid can handle, and the grid starts to bid different workloads in and out. That would make a huge difference. And as I understand it, even those dirtier power systems take some time to spin up and spin down. So if they don't know whether they're going to need them, they have to start spinning them up.

Even if they don't ultimately need them, they're still creating some extra carbon. So just being able to control things so that they know they don't need them, so they could leave them offline for a little bit longer, would make a difference too. That would be even bigger; that's pretty amazing.

Jean-Simon: Yeah, exactly. And that's human nature, right? If we're not sure, we're going to get the extra capacity, because we're not sure. We're going to take a safety margin. And we see that at the building level, with cooling capacity. Cooling in very large buildings is usually done with chillers. Let's say you have four chillers; you run your chillers to produce this coolant,

cold chilled water, which is pumped throughout the building. So imagine these chillers producing very cold water, and the pumps are [00:30:00] moving that chilled water into the entire building, in case your air handling units need it to produce cold air for the AC, right? So you're the engineer in the morning.

You're not too sure what quantity of chilled water you need to produce today to make sure you get through the peak, which is going to be somewhere around four-ish in the afternoon, when it's hottest outside and there are tons of people in the building, and it's going to be hard to reach the desired temperature.

So if you're not sure, let's say you could get away with only three chillers today, you're going to start the fourth one anyway, and produce a large quantity of chilled water just in case. Because the worst thing that could happen to you is making the wrong call and people being too hot at 4 p.m., and then complaining that you're doing a bad job. So, as any human would, you go: I don't want to receive those complaints, let's start the fourth chiller just to be safe. What's interesting with AI is that we're [00:31:00] calculating with the prediction exactly what the load is. You can put in a bit of safety if you want, but then you realize that, no, you're fine with three.

Look at the prediction: it's solid, you can make the call. And then suddenly you're saving a lot of energy that day; you're not overproducing. The same thing is happening at the grid. You're absolutely right: if you're the grid operator, the last thing you want is to be short of electricity, right,

and have to basically shut down a neighborhood. We've seen that in New York, right? They shut down an entire neighborhood because they were out of electricity: sorry, we don't have enough for everybody, so from this street to this street it's going to be off the grid for two hours, and then we'll bring it back on, right?

So that's the worst thing that could happen. If you're a grid operator, this is your nightmare. So if you're not too sure, you're going to start that coal plant and produce just in case.
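The chiller-staging call Jean-Simon walks through, running three chillers instead of four when the prediction supports it, can be sketched like this. The capacities, margin, and function name are hypothetical, not BrainBox's actual logic:

```python
# Sketch of chiller staging from a predicted peak load: stage only as
# many chillers as the prediction (plus a small margin) requires,
# instead of starting a whole spare chiller "just in case."

import math

def chillers_needed(predicted_peak_kw, chiller_capacity_kw, safety_margin=0.10):
    """Number of chillers to run for a predicted peak cooling load,
    with a configurable safety margin instead of a whole spare unit."""
    required = predicted_peak_kw * (1 + safety_margin)
    return math.ceil(required / chiller_capacity_kw)

if __name__ == "__main__":
    # Four 500 kW chillers available; the model predicts a 1,300 kW
    # peak at around 4 p.m.
    n = chillers_needed(predicted_peak_kw=1300, chiller_capacity_kw=500)
    print(n)  # 3 -- the fourth chiller can stay off today
```

The whole energy saving comes from replacing "one extra chiller to be safe" with "a 10% margin on a trusted prediction."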

Bill: Yep. Voila. You also mentioned using large language models for this, which is [00:32:00] interesting, because training them takes a lot of power and running them takes a lot of power. So you're using a lot of power to save a lot of power. How do you find the balance point of that? How do you build an LLM that makes sense for this, so that you come out ahead of the game?

Jean-Simon: So that's another fascinating subject, right? So far, pretty much all consumers have seen of LLMs, of generative AI, is the super large language model that could answer any question you may have.

We're trying to cover the entire spectrum of humankind's knowledge in one model. Imagine that. These models have, like, the entire knowledge of humankind, and if you ask them any question, they will try to get you an answer. Sometimes they're not accurate, but they will try to get you an answer.

So this is, I think, great for flash, for showing what you've done. But on the business side, it's very rare that that's what you need. When you look at the different [00:33:00] business problems we're trying to solve, let's just take the example of a chemical plant. If you're operating a chemical plant, you want to know everything there is to know about chemical plants: how to operate one, how to optimize one, how to operate one safely.

And really, your interest is in that sandbox, which is just the chemical plant. So why deal with a model that could tell you the best cheesecake recipe? You only need a model that is an expert on chemical plants. It's a bit the same thing we've done on our side. We said: really, the only thing we care about is buildings.

Building operation, building mechanics, building energy. That's a very limited sandbox of knowledge that engineers are usually drilled in at university when they're studying building mechanics, and then when they start working, they learn more within that specific sandbox.

So that's exactly what we've done: we targeted that sandbox and said, [00:34:00] we only need an LLM that is specialized in that knowledge sandbox. And then suddenly you have a very small model that can be trained with a very small amount of computing power and energy, and is specialized in one field.

So there's that virtual building assistant that we call ARIA, which is now entering its beta user-testing phase. If you ask ARIA what the best cheesecake recipe is, ARIA will very politely decline and say: I don't know about cheesecake. My specialty is building operation, building mechanics, building energy.

Feel free to ask me a question about those topics, but unfortunately, I'm not a generalized model. And I think that's the future. I think what we're going to start to see now on the business side is a lot of very small models that are specialized in a vertical topic.

Think of accounting or legal. You're going to have these [00:35:00] models that are very specialized in accounting or legal topics, and they won't know about other topics, but that's okay, because their role is to help legal people or accounting people do their job.
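The polite refusal behavior described for ARIA can be sketched as a simple domain gate in front of the specialist model. The keyword check here is a stand-in for a real intent classifier, and all names, topics, and phrasing are invented for illustration:

```python
# Minimal sketch of a domain gate for a specialized assistant: answer
# only in-scope questions, politely decline the rest. The keyword
# classifier is a toy stand-in for a real intent model.

BUILDING_TOPICS = {"hvac", "chiller", "boiler", "thermostat", "energy",
                   "building", "ventilation", "heat pump", "setpoint"}

def in_scope(question: str) -> bool:
    q = question.lower()
    return any(topic in q for topic in BUILDING_TOPICS)

def route_to_specialist_model(question: str) -> str:
    """Placeholder for the small, in-domain LLM call."""
    return f"[specialist model answers: {question}]"

def answer(question: str) -> str:
    if not in_scope(question):
        return ("Sorry, my specialty is building operations, mechanics, "
                "and energy. Feel free to ask me about those topics.")
    return route_to_specialist_model(question)

if __name__ == "__main__":
    print(answer("What is the best cheesecake recipe?"))   # polite decline
    print(answer("Why is the chiller short-cycling?"))     # routed in-domain
```

Gating before the model call is also what keeps the compute bill small: out-of-scope questions never reach the LLM at all.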

So why a large language model, as opposed to a standard AI model? Well, that's the very interesting revolution that happened with LLMs. It's not so much that these models know everything; it's their capability to understand what you're asking. To decode the sentence you wrote, or what you're saying if you're talking, and translate that information from a discussion with a human into the different building blocks of what the person wants, and then craft a plan for the answer.

And once you've gathered the information needed to reply, it puts [00:36:00] that information together in a very nice sentence, served to the human in a way that really gives them the feeling of having a discussion, a relationship, with a virtual building assistant. Which is engaging, in the sense that you can have a discussion and actually start to build trust.
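The decode, plan, gather, compose loop described here can be sketched as a small pipeline. In a real system the decode step is an LLM call and the data comes from building telemetry databases; both are stubbed here, and every name and number is invented, so only the shape of the pipeline is real:

```python
# Sketch of a decode -> gather -> compose assistant pipeline with
# stubbed intent detection and stubbed telemetry data.

def decode_intent(utterance: str) -> dict:
    """Stand-in for the LLM turning free text into a structured intent."""
    if "energy" in utterance.lower():
        return {"intent": "energy_report", "scope": "building"}
    return {"intent": "unknown"}

def gather(intent: dict) -> dict:
    """Fetch the data the intent needs (stubbed telemetry)."""
    if intent["intent"] == "energy_report":
        return {"kwh_today": 4210, "kwh_baseline": 5300}
    return {}

def compose(intent: dict, facts: dict) -> str:
    """Turn the gathered facts back into a conversational reply."""
    if intent["intent"] == "energy_report":
        saved = facts["kwh_baseline"] - facts["kwh_today"]
        return f"You used {facts['kwh_today']} kWh today, {saved} kWh below baseline."
    return "Sorry, I didn't understand that."

def assistant(utterance: str) -> str:
    intent = decode_intent(utterance)
    return compose(intent, gather(intent))

if __name__ == "__main__":
    print(assistant("How is my energy use today?"))
```

The LLM's contribution sits at the two ends of this pipeline: understanding the question and phrasing the answer; the middle is ordinary data retrieval.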

So that's really what the LLM revolution was for us: this feeling that even though you're talking to a chatbot, a machine, you have the feeling you're talking to somebody who is not human, but kind of half human, right? And it helps you have a discussion.

I think we've all done it, right? Suddenly you realize it's been 20 minutes and you're kind of enjoying the discussion, because the LLM is serving you with the way of talking that humans use, right? So you're having that [00:37:00] kind of discussion.

And this makes our life much easier when it comes to serving customers with information, because we do have an incredible amount of it. When you look at all the databases we have about your building, the question is: how can we give you back all that information about your own building without giving you a dashboard with 50 tabs and 60 graphs?

Because what happens when you overwhelm a human with these huge dashboards is that they look at all these graphs and go: honestly, I see more green than red, okay, I guess it's good. I don't know. What should I look at? There's too much. So the point of adding that assistant is that you can just ask it:

is everything all right? Is there anything I should know about the 200 buildings I'm supposed to supervise? And the assistant says: well, I did check everything, and everything is all right except these five buildings. Here's what I [00:38:00] discovered with these five buildings, and really, I don't like what I see.
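The supervision funnel described here, scan everything and surface only the buildings that need attention, can be sketched as a triage function. The threshold, field names, and store names below are invented for illustration:

```python
# Sketch of a fleet-supervision funnel: flag only buildings whose
# energy use deviates significantly from baseline, then summarize.

def triage(buildings, max_deviation=0.15):
    """Return (name, deviation) for buildings whose energy use deviates
    more than max_deviation (default 15%) from their baseline."""
    flagged = []
    for b in buildings:
        deviation = (b["kwh"] - b["baseline_kwh"]) / b["baseline_kwh"]
        if abs(deviation) > max_deviation:
            flagged.append((b["name"], round(deviation, 2)))
    return flagged

def summarize(buildings):
    """The assistant's one-line answer to 'is everything all right?'"""
    flagged = triage(buildings)
    if not flagged:
        return "I checked everything; all buildings look fine."
    lines = ", ".join(f"{name} ({dev:+.0%})" for name, dev in flagged)
    return f"Everything is fine except {len(flagged)} buildings: {lines}."

if __name__ == "__main__":
    fleet = [
        {"name": "Store 12", "kwh": 980, "baseline_kwh": 1000},
        {"name": "Store 47", "kwh": 1420, "baseline_kwh": 1000},  # +42%
        {"name": "Store 88", "kwh": 700, "baseline_kwh": 1000},   # -30%
    ]
    print(summarize(fleet))
```

The dashboard with 50 tabs and this one-sentence summary are built from the same data; the funnel just inverts who does the filtering.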

What do you think? So suddenly it's transforming that job completely. You're reproducing that funnel of: what should I worry about right now? And if there's nothing, well, thank you very much, right? So it's that kind of relationship between huge data and a poor human who has to figure out: I'm not going to read all that data. Can you just tell me if everything is all right?

Bill: So the part that makes me laugh about that is, what I was hearing you describe is that you're using an LLM so that people can have a natural conversation with the AI. Which, translated through my head, says: with regular AI, you have to know prompt engineering.

You have to understand what you're asking for and ask for it in the right way. And everyone's set up on LLMs now; we're going to have prompt engineer as a job. But the purpose of the LLM is that you don't have to be a prompt engineer: you can just ask a question and get an answer. Which is kind of funny. We're using an LLM so you don't have to prompt-engineer. I [00:39:00] think that's going to be much more of a trend in the future. Why would we have to do that?

Jean-Simon: Yeah. Since we started BrainBox, people have been saying: I would like to see the AI, I would like to talk to the AI. And we'd say, well, it's a neural network, it's a mathematical formula. What should I show you? You're exactly right: how do you show AI at work to a customer, to a user? Honestly, we sometimes joked that we were going to make a kind of holographic icon that changes color, and that's the AI.

Bill: Like a robot that just goes around the control room and does nothing.

Jean-Simon: Like, there it is! There's the AI. It's doing things. So, yeah, okay. But then the LLM came, and we said: okay, wait, wait, wait. That's why we made a parallel, saying: really, we're doing exactly Jarvis and Iron Man.

So how do you interact with that super-intelligent computer? Suddenly Jarvis becomes the face, becomes your portal [00:40:00] for interacting with that super-intelligent computer. But Jarvis is just the front end, right? It's just a way to communicate. The intelligence is what you want, but

you can't see it, you can't talk to it, so Jarvis really becomes your portal, the face of the AI you interact with. So yeah, that's why we were actually quite happy with that, saying: yes, we're going to use an LLM, but not necessarily in the same way that people see it or use it right now.

Bill: So you do have to give it a name. So ARIA is your Jarvis.

Jean-Simon: Yeah, exactly.

Bill: Jean-Simon, thank you so much for the time. We are at our max right now, so we should probably move along, get on with our days, and let our listeners get back to theirs. But hopefully everyone learned something and had a little bit of fun along the way. So moving forward, how do people keep following what you're up to, find out what ARIA is up to, and keep up with the latest from BrainBox AI?

Jean-Simon: I think the best way is definitely to go to our website, brainboxai.com, for all the [00:41:00] latest news and publications. We actually publish a lot of position papers and white papers that help people understand where the industry is going, why the technology should be used, and how to use it.

Bill: Fantastic. Thank you so much for the time.

Jean-Simon: My pleasure, Bill.

Ad read: That does it for this episode of Over the Edge. If you're enjoying the show, please leave a rating and a review, and tell a friend. Over the Edge is made possible through the generous sponsorship of our partners at Dell Technologies. Simplify your edge so you can generate more value. Learn more by visiting dell.com/edge.