How will we feed the growing population while making agriculture more sustainable? In this episode, Bill sits down with Vikram Adve, to discuss. Vikram is an Illinois Grainger Engineering professor in the Siebel School of Computing and Data Science and Director of the AIFARMS National AI Institute for Agriculture.
Vikram highlights the growing importance of digital agriculture, and how data and technology will play crucial roles in addressing global food production challenges, sustainability, and climate resilience. He also discusses AI's pivotal role and offers insights into optimizing machine learning at the edge.
---------
“The sustainability and the environmental constraints that we have to impose to increase productivity is a very, very difficult set of challenges. And so technology is becoming increasingly important in order to meet those challenges."
“All of these technologies are becoming important because agriculture faces some major challenges worldwide. Probably the biggest and simplest one to understand is that we have to increase food production over the next 25 to 30 years by as much as probably 60 percent.”
--------
Timestamps:
(01:30) How Vikram got started in tech
(03:37) Defining digital agriculture
(07:14) How does AI impact sustainability?
(19:51) Optimizing machine learning in agriculture
(31:30) Near data processing
(37:34) The future of data processing at the edge
--------
Over the Edge is brought to you by Dell Technologies. Unlock the potential of your infrastructure with edge solutions, from hardware and software to data and operations, across your entire multi-cloud environment. We’re here to help you simplify your edge so you can generate more value. Learn more by visiting dell.com/edge or by clicking the link in the show notes.
--------
Over the Edge is hosted by Bill Pfeifer, and was created by Matt Trifiro and Ian Faison. Executive producers are Matt Trifiro, Ian Faison, Jon Libbey and Kyle Rusca. The show producer is Erin Stenhouse. The audio engineer is Brian Thomas. Additional production support from Elisabeth Plutko.
--------
Follow Bill on LinkedIn
Follow Vikram on LinkedIn
Edge Solutions | Dell Technologies
Producer: [00:00:00] Hello and welcome to Over the Edge. This episode features an interview between Bill Pfeifer and Vikram Adve, professor of computer science at the University of Illinois and director of the AIFARMS National AI Institute for Agriculture. Among other edge topics, one of the primary areas of Vikram's current research is machine learning techniques and tools for digital agriculture.
In this conversation, Vikram highlights the growing importance of digital agriculture and how data and technology will play crucial roles in addressing sustainability and global food production challenges and increasing climate resilience. He also discusses AI's impact and offers insights into optimizing machine learning at the edge.
But before we get into it, here's a brief word from our sponsor.
Ad: Edge solutions are unlocking data driven insights for leading organizations. With Dell Technologies, you can capitalize on your edge by leveraging the broadest portfolio of purpose built edge [00:01:00] hardware, software, and services. Leverage AI where you need it.
Simplify your edge and protect your edge to generate competitive advantage within your industry. Capitalize on your edge today with Dell Technologies.
Producer: And now, please enjoy this interview between Bill Pfeifer and Vikram Adve, Professor of Computer Science at the University of Illinois and Director of the AIFARMS National AI Institute for Agriculture.
Bill: Vikram, thanks so much for joining us today. This should be a really fun conversation. You've got so many things going on. You're juggling a lot of different stuff and I am looking forward to digging into that conversation.
Vikram: Me too. I'm excited to be here. Thank you for inviting me to speak on this.
Bill: Absolutely. So let's start with a little bit of background. The standard opening question: how did you get started in technology? What brought you here?

Vikram: So I really have to go back all the way to college. In high school, I applied and got admitted into the [00:02:00] Indian Institute of Technology, Bombay. And I was always interested in science and, to some extent, mathematics, but more physics and chemistry, and really not biology at all.
So I pretty much knew I wanted to go into something related to physics or engineering, and I did an undergraduate degree in electrical engineering at IIT Bombay. And then I think I more or less have gone with the flow as far as an academic career is concerned. I did a Ph.D. in computer science at the University of Wisconsin and then worked for several years at Rice University before joining the faculty position I have now at the University of Illinois.
Bill: And you said you weren't really interested in biology, but you're kind of doing a fair bit of that anyway, as it turns out.
Vikram: That's true, yes, although I always have the hardest time in my current work when biology topics come up. I mean, I enjoy it a lot. I enjoy learning about things, but it's [00:03:00] definitely the area where I have the least expertise.
Bill: Right. Well, and it makes sense. I mean, you're still doing more of the computer science, more of the engineering side of it. So for background, right, you're heavily focused on digital agriculture. Yeah. Which is an interesting term. I want to get to that in just a moment. You co-founded and co-lead the Center for Digital Agriculture at the University of Illinois at Urbana-Champaign, and you lead the AIFARMS Institute, which is one of the national AI research institutes, which is a pretty impressive list of stuff.
And that's not the full list of stuff that you do and that you're working on, because you're awesome. So, in your words, how would you define digital agriculture? What is that?
Vikram: So rather than defining it, I think I would describe what it includes. It's a very broad term, but broadly speaking, any aspect of agriculture that makes use of data could be considered digital agriculture.
What's important about it, though, [00:04:00] is that there are a number of computing and engineering technologies that are beginning to play an increasingly important role in agriculture. Agriculture is becoming more and more driven by data and sensor technologies and automation, leading to increasing levels of autonomy, and now increasingly also artificial intelligence.

And all of these technologies are becoming important because agriculture faces some major challenges worldwide. Probably the biggest and simplest one to understand is that we have to increase food production over the next 25 to 30 years by as much as probably 60 percent. Wow. And change the mix, because as people get more prosperous, they eat more protein, they have changing diets, they want more natural foods, more organic foods. But at the same time, we are not increasing the amount of land available.
If anything, the land available is [00:05:00] shrinking. The population working in agriculture is shrinking because of migration into urban areas and changing professional interests. But really, the aging of the farm population is a huge demographic challenge in agriculture. The average age of the farming population in the U.S. is around 57, and it's been increasing steadily. And younger generations are not going into farming. And beyond that, there are major constraints and challenges because of changing weather, changing climate, just an increasing frequency of extreme events: heat, drought, rainfall, wind events. It's all causing significant impacts on agricultural productivity.
Bill: I guess that makes it a lot riskier.
Vikram: Makes it a lot riskier. A profession that's already risky, because you really are having to invest heavily at the beginning of the year, and you don't see any returns for more than a year afterwards as a farmer. So it's already a very risky proposition. And on [00:06:00] top of that, we cannot afford to use the kinds of techniques that have been used in the past. Many people have heard of the so-called Green Revolution, which is when we dramatically increased food production in the middle to later parts of the 20th century.
But that came at tremendous cost to the environment in terms of water resources, environmental pollution, health impacts, terrible soil erosion, a real decrease in soil health all around the world. It dramatically increased productivity, but we can't afford these kinds of negative impacts anymore.
And so the sustainability and the environmental constraints that we have to impose to increase productivity is a very, very difficult set of challenges. And so technology is becoming increasingly important in order to meet those challenges. And so digital agriculture really is a very broad part of modern agriculture today.
Bill: And [00:07:00] increasing the amount of food while still making it more sustainable is a thing you've called sustainable intensification, which is a term that I love. It just sounds awesome, and it summarizes it perfectly. How is AI required in sustainable intensification? And what does it do, specifically, that we otherwise can't do?
Vikram: Yeah. So first, just to be clear, AI is one piece of a broader technology puzzle. These days, because of the hype around AI, I think people sometimes lean too far into imagining what AI can do and miss the bigger picture. There's a lot of technology: sensors and data and cloud computing and edge computing and many other aspects that actually enable some of the advances in AI to be successful as well.
AI specifically is also a broad term that encompasses [00:08:00] many things. The major successes in AI in the last few years have been in areas like computer vision, natural language processing, generative AI, and machine learning across heterogeneous data sets. And that's one of the major aspects of agriculture: there's so much diversity in terms of the species of crops and seeds and pests and growing conditions and management practices and local environmental constraints. So being able to analyze heterogeneous data is a particularly important challenge, and machine learning techniques are playing a really important role in enabling that as well. And so AI has a number of specific application areas today that are starting to see commercial products being rolled out.
But I think that AI is still very much in its infancy in terms of its [00:09:00] capabilities being really used for advancing goals in agriculture. But if I can give a couple of examples of important technologies today: recently, John Deere came out with a system called See & Spray, which uses computer vision and machine learning to identify weeds and do very, very selective spraying to destroy weeds in fields.
Which greatly reduces the amount of chemical herbicides that are needed, which dramatically brings down pollution and harmful effects on human health. And another example is that companies that breed seeds are increasingly using computer vision, machine learning, and robotics to speed up the process of evaluating different seed hybrids.
They have to actually evaluate many different seed hybrids under many different growing conditions, so they really end up with [00:10:00] thousands or tens of thousands of different trials. And they do this every year to be able to improve seed quality, drought resilience, climate resilience. This is a constant process of improvement that seed companies go through, and technologies, AI in particular, are playing a really important role in speeding up that process, making it faster and cheaper. And maybe one more example, from the livestock world: increasingly, automation is being used in dairy operations, for example, for milking cattle.
More and more, they're actually benefiting cows that might need to be milked at certain times of the day, at certain intervals, because now cows can actually go up directly to an automated system. There's a simple path they have to follow in a shed where they can walk in, and then they are guided to the machines that do the milking, and it's more or less all completely [00:11:00] automated, without human supervision. And so this is not just reducing the labor requirements in livestock, which is a major challenge in livestock production today, the lack of availability of labor, but it's also actually improving animal welfare, because the cows are just better treated in these kinds of situations.
Bill: And they can just go milk themselves when they're ready. They can basically tell a machine, "Milk me."

So all of that stuff requires a fair bit of technology, right? AI or otherwise: data storage, data movement, generation, processing. All of that stuff is fairly power intensive. In some industries, that power intensity is the major driver of sustainability improvements. Is that true in digital agriculture, or is it more about spraying too many chemicals and, you know, greenhouse gases created by [00:12:00] livestock and things like that? Is energy one of your major considerations, or is it just kind of a secondary thing?
Vikram: It's not secondary. Energy is an important consideration, but there are many other important ones as well, so I wouldn't put energy at the top for sure. The way energy comes in especially is, for example, nitrate-based fertilizers, which are very widely used: they need an extremely energy-intensive process called the Haber-Bosch process.
And basically, taking nitrogen from the air and generating nitrate fertilizer is a very, very energy-intensive process. So that's one major area of trying to improve energy efficiency. All kinds of agricultural equipment use fossil fuels today, and that does lead to significant amounts of pollution. So that is another important energy usage area. But really, there are much more pervasive and perhaps even more important problems [00:13:00] with sustainability and environmental impacts, including methane emissions from agricultural feedlots, including CO2 emissions.
Environmental impacts from monoculture farming, for example. The way that modern agriculture has evolved, especially in more developed countries, is to have very large farms growing one or two crops on huge fields with very large equipment. And it's almost like a very strong feedback loop: you need larger equipment for larger farms, and you can only run larger farms with larger equipment. That gives you great efficiency gains, but this very strong feedback loop has created increasing farm sizes and increasing equipment sizes. And because of that, we have this monoculture impact.
Monoculture farming is the term that's widely used for this. It causes [00:14:00] significant harm to the bacterial life in the soils, because you have just one crop type and very limited kinds of weeds that will grow around that. And so the microbiome of the soil, which is crucial to agricultural productivity, is greatly narrowed, and the health and productivity of the soil are impacted because of that. Soil erosion is another major problem: very often we leave fields basically uncultivated, as fallow, for months, like in the summer season, and soil erosion has been a major problem because of that. And then fertilizer overuse causes runoff into waterways, which has a number of impacts both on human health and on aquatic fisheries.
In fact, you have what's called a dead zone where the Mississippi River meets the Gulf of Mexico, which at one point was larger than the area of Rhode Island and [00:15:00] had severely depleted aquatic life. And that is impacting fisheries in Gulf states like Louisiana and Mississippi, because of the depletion of aquatic life. And so these are just among the biggest environmental impacts of agriculture; there are others as well.
Bill: Interesting. In general, we spend so much time thinking about the power that's used in all of this technology, and that tends to be from a data center perspective, a large business perspective. The density of it becomes, effectively, the lower-hanging fruit, right? If you want to make AI more sustainable, make it use less power. But for digital agriculture, it makes sense that it's so energy intensive to make those chemicals, and so damaging to use those chemicals, to overwater, to erode the soil. If you can prevent all of those things, the energy used by the AI is really secondary. That's [00:16:00] an interesting, very different way of looking at it. That's a great perspective. Thank you.
Vikram: Yeah, but at the same time, because agriculture is such an enormous domain, even the secondary effects are still important. And so I would expect that as AI becomes more widely used, the energy impacts of AI will start to become more significant.
Bill: Yeah, certainly not something we can ignore.
Vikram: Exactly. Yeah. I mean, worldwide, AI is a tremendous user of energy, and this is a huge problem. Agriculture, I think, is just starting to see that impact.
Bill: Right. Well, and that's not really the low-hanging fruit for agriculture there. We should use more AI if we can, to help fix some of the other stuff, and later on we can worry about how much power that's using.
Vikram: Yeah, we're very far away from a point where the energy impacts of AI and other downsides of AI are the limiting factor in deploying AI. We [00:17:00] still have a long way to go before we get to that point.

Bill: It's interesting.
We've talked a couple of times on this podcast about indoor farming and how sensors and IoT and modern methodologies are helping make that more effective, more efficient, more cost effective. But we haven't talked about outdoor agriculture, and it's a very different story because you're at the mercy of the weather and you can't control the oxygen and carbon dioxide levels of the air and the heat and stuff like that.
And boy, putting sensors all over acres and acres and acres of fields, versus going vertical and having some cameras pointed at it, is a very different conversation. It's a whole different ballgame.
Vikram: Yeah. And in fact, the kinds of sensors and other technologies that get used outdoors tend to be those that can really scale efficiently to very large [00:18:00] farms and regions.
And so probably the most widely deployed ones these days are remote sensing based techniques that use satellite data or even sometimes aircraft or drones to collect images and hyperspectral information about the health of crops. And then use that to predict yields or to make other kinds of farming decisions.
Computer vision is another technology that's becoming increasingly used, because it can be used without having to install sensors in the fields. You can carry a camera around, whether it's on a drone, on a tractor, or on small robots. And increasingly, people are looking at using small and medium-sized robots for gathering data in the fields, but also to perform specific tasks during agriculture.
So computer vision is another sensing technology that I think can get much more scalable than others. You don't have to have in situ sensors.
Bill: [00:19:00] Okay. So jumping tracks a little bit. We've been talking about digital farming so far, but switching over more to the technology, the engineering and the computer science, more your happy space.
You've done a lot of work on optimizing machine learning for the edge. And again, you know, not so much focused on the power requirements, but if you're running tons of AI, or it's inefficient, then you have bigger processors and heavier equipment and all sorts of other challenges, in addition to drawing more power, which is still a consideration. So we still need to pay attention to how optimized the machine learning is, especially when you're talking about smaller devices, sensors, mobile devices, things like that. So how do you optimize machine learning in digital agriculture, or just for the edge in general?
Vikram: So I should start before my work, because people have been working on optimizing machine learning for the edge for quite a while.[00:20:00]

And so our work is more recent, obviously, in that space. For example, in the early days, a lot of the work on optimizing machine learning involved first designing machine learning models that are smaller, just in terms of the structure of the model, using fewer parameters in order to make them more efficient at the edge.
And so models were tailored for edge devices and for edge computing. In fact, there was a whole line of work on what's called neural architecture search, which is basically searching through a space of designs, where an architecture is just the structure of a neural network, to look for architectures that are efficient and small and get the best possible accuracy within that size constraint.
constraint. And then another important line of work is model compression. So there's been quite a few successful pieces of work or advances in [00:21:00] essentially taking what could be a very large model with hundreds of millions of parameters or many tens of millions of parameters and identifying the most important kind of nodes of that neural network, the most important Parts of the neural network for getting accuracy for the particular task the network is designed for.
And that's called model pruning. So it's basically pruning the model down to keep only the parameters that are important while preserving accuracy and retraining the model as you shrink it down. And so there's been a very successful line of research on model pruning, which reduces the size of the model while trying to maintain accuracy.
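One common form of the model pruning Vikram describes is magnitude-based pruning: zero out the smallest-magnitude weights and keep the rest. A minimal numpy sketch, purely illustrative (the weight values are made up, and real systems would also retrain after pruning, as he notes):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.

    Ties at the threshold are also pruned, so slightly more than the
    requested fraction may be removed in degenerate cases.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

# Toy 2x3 weight matrix; prune 50% of the entries
w = np.array([[0.9, -0.05, 0.4],
              [0.01, -0.7, 0.1]])
pruned = magnitude_prune(w, 0.5)
# Only the three largest-magnitude weights (0.9, 0.4, -0.7) survive
```

In practice the surviving weights are stored in a sparse format and the model is fine-tuned to recover the accuracy lost by pruning.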
And there are other techniques, like using fewer bits of precision to represent numbers. So instead of representing a number as a 32-bit floating-point number, you might represent it as a 16-bit floating-point number, or a 16-bit or 8-bit integer, or even a 4-bit integer in some cases, [00:22:00] while still maintaining most of the accuracy that you would have had if you had used the high-precision representation.

And this technique is sometimes called quantization: basically, you're quantizing values into representations with fewer bits, which greatly reduces the amount of arithmetic calculation you need to perform and makes models much more efficient for the edge. And the line of work that our group has been pursuing, which further improves on some of these techniques, is based on knowing how the model is being used, rather than optimizing the model in isolation, by itself.
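The quantization idea can be sketched as simple symmetric int8 post-training quantization. This is a minimal illustration of the general technique with made-up values, not the exact scheme of any particular framework:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric quantization: approximate x as scale * q with int8 q."""
    scale = float(np.abs(x).max()) / 127.0   # map the largest magnitude to 127
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original values."""
    return q.astype(np.float32) * scale

# Toy weight vector in float32
w = np.array([0.5, -1.0, 0.25, 0.0], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# q uses 1 byte per value instead of 4; the round-trip error is
# bounded by about scale/2 per element
```

Integer arithmetic on `q` is much cheaper than floating-point arithmetic on `w`, which is the efficiency win on edge hardware; 4-bit schemes work the same way with a coarser grid.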
So, for example, say you have a computer vision model that is recognizing objects in images, and you have optimized it while preserving the accuracy of that model as much as possible. What we have shown is that when you use that model in the actual application, [00:23:00] it may actually be all right to relax that accuracy a little bit further, because the application has some specific goals in mind.

There's a task the application is performing. One example I can give: you have robots navigating between rows of crops in your field, and we're using computer vision to guide that navigation. So you're detecting the rows on either side and using that to chart a path for the robot. The robot doesn't actually care about getting the highest possible accuracy in detecting the rows of crops.

We don't need the maximum accuracy out of the neural network. We can relax that accuracy as long as we don't lose the accuracy of the navigation itself, so we don't go too far off the center line between the two rows that we're navigating. And so by feeding that information back, the application quality metric, if you think about it that way, which is what you really care about, you can optimize the machine learning models, the neural [00:24:00] network, much more aggressively.

And so, for example, we can get a factor of five additional improvement over just optimizing the models in isolation. We're 5x faster, because we know that we don't need the full accuracy of the original model. And that's just the general idea. What we're really trying to do is build a customizable system that can apply these optimizations for a wide range of different applications that may have very different kinds of quality metrics.
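One way to sketch this application-aware idea is as a search over model variants: pick the cheapest variant whose application-level metric, for example deviation from the row centerline, stays within tolerance, instead of demanding maximum standalone model accuracy. The variant names, costs, and error numbers below are purely hypothetical stand-ins, not measurements from the group's system:

```python
def pick_variant(variants, app_error, tolerance):
    """Return the cheapest (name, cost) whose application error
    is within tolerance; fall back to the most accurate variant.

    variants: list of (name, relative_cost), sorted cheapest first.
    app_error: function name -> application-level error
               (e.g., mean deviation from the row centerline, in meters).
    """
    for name, cost in variants:          # cheapest first
        if app_error(name) <= tolerance:
            return name, cost
    return variants[-1]

# Hypothetical measured navigation error for each compressed variant
errors = {"int4": 0.30, "int8": 0.08, "fp16": 0.05, "fp32": 0.04}
variants = [("int4", 1), ("int8", 2), ("fp16", 5), ("fp32", 10)]

choice = pick_variant(variants, lambda n: errors[n], tolerance=0.10)
# The int8 variant meets the 0.10 m tolerance at one-fifth the
# cost of fp32, mirroring the kind of 5x gain described above
```

The key design point is that the tolerance is defined on the application metric (navigation deviation), not on model accuracy, which is what allows the more aggressive compression.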
So just as an example, a completely different use case for this might be modern augmented reality and virtual reality applications, which is another really important and very ambitious use case for edge computing. You use a number of different neural networks and other computational algorithms for virtual reality [00:25:00] or for augmented reality. And there, the quality metrics will be very different, because the application goals are related to whatever the virtual reality headset or the augmented reality application needs to maintain as a quality metric. And so there, we want to be able to optimize the neural networks in a similar way, but we want to take a very different kind of quality metric and feed it back into this optimization system. And so we're trying to make this as widely usable as possible in a number of different edge computing application domains, by defining a set of requirements for what the quality metrics can be and how they are used to optimize the application.
Bill: It's a really interesting point about the accuracy.
I guess when you're steering a robot through lines of crops, you just need to [00:26:00] not hit stuff. You don't really need to carefully identify what that stuff is. Look, a piece of corn. Look, a piece of corn. Look, a piece of corn. It's all corn.
Vikram: Just don't hit it. That's exactly right. Yeah.
Bill: And it's kind of an interesting, I hadn't thought about that, but right, even you were talking about John Deere doing selective spraying of weeds and you don't necessarily have to exactly identify what the weed is.
If it's not corn, it's a weed. If it's not whatever it is that you're growing, you know, you can feed that in and identify that and everything else is just not that.
Vikram: Exactly. So identifying not-corn, or not-soybean, or not-whatever is the primary task there for the computer vision. There are more advanced applications where you want to actually keep some weeds and not kill all of them.

I don't think See & Spray does that yet. I'm not sure anybody does that yet, actually. But I know weed scientists talk about this. There are beneficial weeds and harmful weeds. And so [00:27:00] whenever we get to the point of really being able to optimize the technology further, we might actually want to take that into account as well.
But today, it's just: find the weeds and kill them.

Bill: That is the first time I've heard the term weed scientist. My wife is an herbalist, and I get in trouble all the time for mowing the lawn. Oh no, did you mow the thing? The thing that looked like a weed? Yes, I did, because it was in the middle of the lawn. No, that wasn't a weed. Okay. Yes. So yeah, I'll have to take that back to her and she can say, see?
No, that wasn't. Okay. Yes. So yeah, I'll have to take that back to her and she can say, see.
Vikram: And actually, the surprising thing there is that weeds can be beneficial. I think one of the things people don't quite appreciate is just how complex an ecosystem a field where crops grow is. There are so many different organisms in the soil that are important for the health of the plant, and maintaining the richness of those organisms matters. This is earthworms; it's [00:28:00] nematodes, where there are a few beneficial nematodes and many harmful nematodes.
It's all kinds of microbes, bacteria that are beneficial for the soil and some that are harmful. And maintaining that kind of balance can be really important, but you need a diversity of plants in the field to maintain the health of the microbiome and the other organisms in the soil. And so that's why weeds can be beneficial: they can sustain other kinds of life.

Bill: Wow. That's, yeah. So then that just takes us back to the whole monoculture farming thing, where you're putting one kind of thing in a field and nothing but, and then you're trying to kill everything that isn't that thing, without identifying the good versus the bad. And wow, that makes the conversation so much more complex. Do we even know what would constitute a healthy biome for something like that, when you're [00:29:00] working within the bounds of monoculture farming? Because that's primarily what we're doing. Ideally we wouldn't, but.
Vikram: Well, we know a lot. There's, I'm sure, a lot more still to study, but soil science is a very important and large community of researchers that specialize in understanding these kinds of behaviors of the soil. And so, for example, in the AIFARMS AI Institute, we have a thrust on soil health, which is basically soil scientists and AI people working together to make predictions about how nutrients flow in the soil, how we can impact the environment, and about the microbiome of the soil.

So we're actually building a novel dataset about the microbes in the soil in the context of yields and other agricultural data. And so understanding and studying the soil is a really important area of research.
Bill: Yeah, it's how we all get food, kind of an important thing. [00:30:00] And especially as we get more people and have more climate challenges.

Ooh, okay. Climate resilience is another term that you used earlier. That's one that I really like. It resonates; it makes a lot of sense.
Vikram: Yeah. So climate resilience is basically trying to develop crops that can survive, and not only survive but thrive, and give good yields, good productivity, even as we get increasing droughts.

So you might have water shortages, and you need crops that are more resilient to water shortages. You might have elevated levels of carbon dioxide, and you want crops that can really maximize their output in the presence of elevated levels of carbon dioxide or ozone. And so this tends to be especially in the area of seed breeding, where you're developing seed varieties that have better water-use efficiency, nitrogen-use efficiency, and carbon dioxide [00:31:00] efficiency or productivity.

And so there's a rich area of research on this whole broad area of climate resilience. Some of the leading researchers in that area, I should say, actually work right here at the University of Illinois. Some of them are part of AIFARMS, some of them are not, but they've been working on this topic for a very long time, with really dramatic advances in the resilience of crops to climate and in understanding how to make them resilient.
Bill: I love this conversation. There's so much that I've never thought about inside of the use cases we're talking about here. This is great stuff. So you've also done some work, back on the engineering and computer science side of it, on getting compute, memory, and storage closer together.
That's usually a data center conversation, right? Not moving data around the data center or between data centers. But at the edge, it's also immensely important: you can get faster responses, use less [00:32:00] power, and have a smaller-footprint package, things like that. Can you talk some about what you're doing in that space, and how?
Vikram: Sure. So this area is often called near-data processing. Normally, data has to be brought from main memory, which is where data is stored when a program runs, all the way to the processor: it goes through a connection network, through smaller local memories closer to the processor called caches, and then into the processor itself for the actual compute operations, and then the results are sent all the way back up the chain to main memory, where they're stored. That is a lot of data movement, and much of the energy usage in data centers today is dominated by this data movement. And so reducing that [00:33:00] energy cost is becoming an increasingly important problem, especially as the volume of data has grown dramatically over the last couple of decades, and it's just continuing to grow at a breathtaking pace.
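As a rough illustration of why data movement dominates, here is a back-of-envelope sketch in Python. The energy figures are illustrative order-of-magnitude estimates supplied for this sketch, not numbers from the episode; real hardware varies widely.

```python
# Back-of-envelope: why data movement, not compute, dominates energy.
# These per-operation energies are rough, illustrative assumptions only.
PJ_PER_FP_OP = 4.0           # ~pJ for one 64-bit floating-point operation
PJ_PER_CACHE_ACCESS = 25.0   # ~pJ to fetch a 64-bit word from a local cache
PJ_PER_DRAM_ACCESS = 1300.0  # ~pJ to fetch a 64-bit word from main memory

def energy_pj(n_ops, words_from_dram, words_from_cache):
    """Total energy (picojoules) for a simple kernel under these assumptions."""
    return (n_ops * PJ_PER_FP_OP
            + words_from_dram * PJ_PER_DRAM_ACCESS
            + words_from_cache * PJ_PER_CACHE_ACCESS)

# Streaming kernel: one FP op per word pulled from DRAM (e.g., summing an array).
n = 1_000_000
compute = n * PJ_PER_FP_OP
movement = n * PJ_PER_DRAM_ACCESS
print(f"compute: {compute / 1e6:.1f} uJ, DRAM movement: {movement / 1e6:.1f} uJ")
print(f"movement costs {movement / compute:.0f}x the compute energy")
```

Under these assumed figures, moving each word from DRAM costs hundreds of times the energy of operating on it, which is the asymmetry near-data processing tries to attack.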
And so, because of the growth in the size of data, we really need to try to reduce the amount of data movement. This is, I should say, a wide-open area. There are virtually no commercial products that actually move the computing in this way, and there are a number of difficult engineering and technical challenges in moving the processing elements out of the processor and closer to the different memories within the system. In fact, Micron Technology, which is a major memory manufacturer, is one of the sponsors, as part of the Semiconductor Research Corporation, of a whole research center around [00:34:00] this area of work, called the PRISM Center.
It's led by UC San Diego; Professor Tajana Rosing there is the head of that center, and a number of companies from the Semiconductor Research Corporation sponsor that work. Somebody from Micron said to me that there have been so many different research proposals for different ways to design these processors and bring them closer to the memories, and they know how to manufacture all of them. So there is not a fundamental technology challenge in creating these designs. The real challenge is how to get them adopted in the real world without forcing people to rewrite all their software to make use of these processors, because the processors are going to be quite different, in terms of the operations they perform and the data movement they need, compared with traditional processors living in the CPU.
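To make the near-data idea concrete, here is a toy model, a sketch invented for illustration rather than any real product's behavior, comparing the bytes that cross the memory bus when a reduction runs on the host versus next to the memory:

```python
# Conceptual model of near-data processing (hypothetical, not a real API):
# count the bytes crossing the memory bus for a sum over n 64-bit words.
WORD_BYTES = 8

def bytes_moved_host(n_words):
    # Host-side sum: every word travels DRAM -> CPU; one result word goes back.
    return n_words * WORD_BYTES + WORD_BYTES

def bytes_moved_near_memory(n_words):
    # Near-memory sum: the data stays put; only the final result crosses the bus.
    return WORD_BYTES

n = 1_000_000
print(bytes_moved_host(n))         # 8000008 bytes across the bus
print(bytes_moved_near_memory(n))  # 8 bytes across the bus
```

The data stays where it lives and only the result moves, which is exactly why the programming model has to change: the "processor" doing the sum is no longer the CPU the software was written for.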
And so [00:35:00] that programming challenge is actually one of the major gating concerns in being able to deploy these technologies. On the edge, which you mentioned, these ideas can be important too, but it turns out there isn't as much scope for moving processing closer to the actual memory units. A much more dramatic and much more important strategy that has some of the same impact is systems on a chip, or SoCs. All modern edge computing devices, except the really very weak ones, live on SoCs.
So a cell phone is using a very rich processor that really runs a lot of different specialized computing capabilities. By integrating them on a single chip, you're basically moving a lot of different compute capabilities, different kinds of processors, all onto a single chip [00:36:00] and integrating a lot of the memory system, at least the local memory, onto that chip as well.
And now, increasingly, multi-chip modules are putting multiple chips into the same package, which further reduces the energy requirements of moving data between chips. And so these kinds of technologies are reducing the cost of data movement, but at the scale of the edge, where you have more compact, more energy-efficient devices with smaller form factors and tighter size, weight, and power constraints.
And within that, both SoCs and multi-chip modules are having some of the same benefits.
Bill: Interesting. So, do you see AI at the edge, digital processing at the edge of any kind, becoming, I mean, there's talk, again mostly data center type stuff, about all the different types [00:37:00] of accelerators.
We have GPUs, we have NPUs, we have DPUs, we have ASICs, we have, you know, computer vision specialized chips and things like that. Do you see that coming out to the edge or do you think there's enough processing power in just a generalized CPU or a system on chip? You know, in some cases, just a Raspberry Pi, like a little ARM processor, that's doing everything all in one, super low power on a battery.
Is that enough for most of the use cases? Or do you think we need to, or will, get more toward bringing in those specialized chips and specialized methodologies?
Vikram: Yeah. So I think edge computing today, if you include cell phones, AR and VR headsets, many smart cameras, and other kinds of applications,
they are all possible only because of SoCs. They really wouldn't even have been feasible without this kind of integration. The level of [00:38:00] integration has been increasing dramatically over the last few generations. If you take an iPhone or an equivalent Android phone today, the main processor on that phone is already a really rich SoC.
It's in fact among the richest kinds of integration, nowadays often with 40, 50, maybe even 60 different kinds of processors of very different capabilities. So there'll be a main processor with many, many cores in it, a GPU with many cores in it, plus nowadays an AI engine, image processing, sound processing, and then lots of little specialized devices
for many other kinds of monitoring. For example, the inertial measurement unit and all kinds of other sensors all require processing capabilities, and they're all integrated into this one chip, or perhaps a small multi-chip package. So SoCs are pervasive today; our whole modern world lives on SoCs.
Areas [00:39:00] where I think they haven't had as much of an impact, or at least where they're more limited, are more specialized applications. For example, if you take an edge device from NVIDIA, like a Jetson Xavier or a Jetson Nano, those are also SoCs, but they typically just have one processor and one GPU, and in some cases also one custom AI accelerator.
And those are integrated into a single SoC, so you have a limited amount of integration into a single processing unit. But those are kind of the mid-range to high-end edge computing devices that are being used in many different applications. The Raspberry Pi scale of the world is really only for very cheap sensors at the low end.
They're common and widely used because the number of them is huge, but the amount of data and the kinds of applications they're used for [00:40:00] are very narrow and limited to very specific kinds of sensors. But the vast majority of richer edge computing applications are really possible only because of rich SoCs.
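As a sketch of the low-end, Raspberry Pi-class sensor pattern described here, the following hypothetical node does its filtering on-device and transmits only anomalies; `should_transmit`, the threshold, and the simulated readings are all invented for illustration, not any real sensor API.

```python
# Hypothetical low-end edge sensor node: sample locally, keep processing
# on-device, and transmit only readings that deviate from the rolling mean.
from collections import deque

WINDOW = 16        # number of recent samples kept on-device
THRESHOLD_C = 2.0  # report only deviations larger than this (degrees C)

def should_transmit(reading, history):
    """Decide on-device whether a sample is worth sending upstream."""
    if len(history) < WINDOW:
        return False  # not enough context yet; stay quiet
    mean = sum(history) / len(history)
    return abs(reading - mean) > THRESHOLD_C

history = deque(maxlen=WINDOW)
for reading in [20.0] * 16 + [20.1, 25.0]:  # simulated temperature samples
    if should_transmit(reading, history):
        print(f"transmit anomaly: {reading:.1f} C")  # only the 25.0 C spike
    history.append(reading)
```

Almost all samples stay on the device; only the outlier crosses the network, which is the same reduce-the-data-movement instinct applied at the far edge.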
And they're growing. So if you take an area that could have a lot of potential growth in the future, like virtual reality and augmented reality: those headsets have the same level of technology needs, I think, as cell phones, if not even greater, because the human interaction has to be in real time, and virtual reality headsets are very difficult to design because of that.
In fact, it's one of the major limitations: the size, weight, and power of these things, the bulk, and the discomfort that people face from using even the best ones today. They just need significant improvements in chip [00:41:00] capabilities, the level of integration, and the speed of processing of that data.
So all of these areas, and robotics is another one. Robots are becoming incredibly powerful these days, and they are just proliferating in terms of use cases. Many people fear them because they might replace jobs, or they might even cause more harm. But really, I think that's more of a misunderstanding than anything else.
Most robots, virtually all robots, in the real world, for the foreseeable future, are going to be used to increase human capabilities, not to replace them. So, for example, a doctor performing microscopic surgery would not be able to do it without the help of a robotic system that is really using technology to perform that kind of operation.
Or a person living in a rural area, where they don't have access to advanced [00:42:00] medical specialties, may not be able to get the kind of health care they need without telemedicine and telesurgery, where you have remotely controlled equipment able to perform this kind of thing. There are all kinds of application areas for robotics today: in-home health care, agriculture, manufacturing, warehouses, where we're really expanding human capabilities.
And all of these robots, just to bring this back to the edge question, live on a range of different kinds of compute devices, but they're typically in the mid-range to high end of those devices. Some of them are comparable to what you might see in an autonomous car.
So autonomous vehicles are a whole other large domain that's growing fast, and they need some of the most advanced edge computing capabilities. Arguably more challenging than a cell phone, but not necessarily; I think cell phones [00:43:00] are really a good prototype of what is a really difficult problem today.
And autonomous vehicles have major safety constraints, and so they have many, many compute requirements. I think the level of integration is not the limitation there, but they do need powerful edge computing capabilities, and edge AI is a major part of that.
Bill: Wow. That gives me so much to think about, across biology and engineering and computer science.
This is fantastic. Thanks so much for the perspective. How can people find you online and keep up with the latest work that you're doing?
Vikram: I think my website is probably the easiest place to find me. If you just Google my name, you will find it. It's one of the top hits. And my website is relatively up to date.
I don't think I would say it's fully up to date, but you can get all my papers there, and most of my current research projects. There are probably new ones that are not reflected there yet, but that's the best place to find me. Fantastic. It's easy. [00:44:00] It's just vikram.cs.illinois.edu.
Good stuff.
Bill: All right, Vikram, thank you so much for the time today and for the perspective. You're working on some amazing stuff, and I appreciate you sharing it with us.
Vikram: Sure, it's my pleasure.
Ad: Capitalize on your edge to generate more value from your data. Learn more at dell.com/edge.