Understanding the Inner Workings of AI
“The excitement around artificial intelligence comes from the fact that the machines now don't need humans' explicit instructions, but rather they look at past historical patterns based on historical data, and learn how to do those things themselves.”—Ebrahim Bagheri, professor at Toronto Metropolitan University.
Rosa van den Beemt, Director of Stewardship in Responsible Investment at BMO Global Asset Management, sat down with Ebrahim Bagheri to discuss artificial intelligence today, from positive use cases, to ethics, to risks.
Listen to our ~25-minute Part 1 episode
Listen to our ~18-minute Part 2 episode
The Sustainability Leaders podcast is live on all major channels, including Apple and Spotify.
Part One
Ebrahim Bagheri:
The excitement around artificial intelligence comes from the fact that the machines now don't need humans' explicit instructions, but rather they look at past historical patterns based on historical data, and learn how to do those things themselves.
Michael Torrance:
Welcome to Sustainability Leaders. I'm Michael Torrance, Chief Sustainability Officer with BMO Financial Group. On this show, we will talk with leading sustainability practitioners from the corporate, investor, academic, and NGO communities to explore how this rapidly evolving field of sustainability is impacting global investment, business practices, and our world.
Speaker 3:
The views expressed here are those of the participants, and not those of Bank of Montreal, its affiliates, or subsidiaries.
Rosa van den Beemt:
Hi, everyone. I am Rosa van den Beemt, Director of Stewardship in Responsible Investment at BMO Global Asset Management. Today we're joined by Ebrahim Bagheri, professor at Toronto Metropolitan University, formerly known as Ryerson. He is the Canada Research Chair in Social Information Retrieval, and the NSERC Industrial Research Chair in Social Media Analytics. He's also the director of the NSERC CREATE program on the Responsible Development of AI, and the recipient of several awards, including the NSERC Synergy Award for Innovation, which recognizes outstanding industry-academia collaboration.
We will be discussing artificial intelligence today, from positive use cases, to ethics, to risks. This two-part series will help listeners understand the inner workings of AI today. Ebrahim, thanks so much for being here.
Ebrahim Bagheri:
Thank you very much, Rosa, for having me. It's a pleasure.
Rosa van den Beemt:
Maybe we can start with some basics, because artificial intelligence, as I know from you, has been around for quite a long time already. But it seems like in the last few years people have come to equate it a little bit more with natural language processing tools like ChatGPT. Can we start by briefly discussing what AI is? And what would be the difference, for example, between an algorithm and artificial intelligence?
Ebrahim Bagheri:
Yeah, that's a great question. Algorithms have been around for many, many decades, if not centuries. And essentially what algorithms are, are structured workflows of how we accomplish things. So you can think of algorithms as workflows. But within computers, what algorithms are, are a set of instructions that the computer can follow efficiently.
If you write a program, you are essentially coding a set of instructions for the computer to do, and we call those algorithms. For more complex tasks, we need more efficient algorithms. People within algorithm design and computer science have been working on designing more efficient algorithms for certain problems.
Now, you can think of AI, also, as a set of algorithms, but what we are talking about with AI right now is essentially statistical machine learning. And the expectation of statistical machine learning is for the machine to learn to create algorithms on its own. So essentially you want the machine to learn to do things without you explicitly instructing the machine to do those things. The excitement around artificial intelligence comes from the fact that the machines now don't need humans' explicit instructions, but rather they look at past historical patterns, based on historical data, and learn how to do those things themselves. So if you have to distinguish between AI and an algorithm, the difference is that AI will help you, as humans, learn the algorithms based on observations from data.
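For readers who want to see that distinction in code, here is a minimal, hypothetical Python sketch; the spam-filtering task, the example messages, and the rule are purely illustrative. The first function is an explicit algorithm, where a human wrote the rule directly. The second infers a comparable rule from labeled historical data, which is the statistical-machine-learning idea described above.

```python
# Explicit algorithm: the human writes the instruction directly.
def is_spam_explicit(message: str) -> bool:
    return "free money" in message.lower()

# Learned rule: infer from labeled historical examples which words signal spam.
def learn_spam_words(examples: list[tuple[str, bool]]) -> set[str]:
    spam_words, ham_words = set(), set()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words - ham_words  # words seen only in spam messages

history = [
    ("claim your free money now", True),
    ("meeting moved to 3pm", False),
    ("free money waiting for you", True),
    ("lunch tomorrow?", False),
]
learned_words = learn_spam_words(history)

def is_spam_learned(message: str) -> bool:
    return any(word in learned_words for word in message.lower().split())

print(is_spam_explicit("Free money inside!"))  # True: the rule was hand-written
print(is_spam_learned("free money inside!"))   # True: the rule was inferred from data
```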
Rosa van den Beemt:
That's really interesting and I think good to know in this day and age especially, what the difference is. I think it's been in the news a lot. Artificial intelligence seemingly has the potential to create a lot of efficiencies, like you said, help us solve problems, but it has also been likened to a potential threat to humanity. We've heard people say, "Oh, it might have potential impact similar to global pandemics or nuclear war." So what exactly is the impact and what are we dealing with? Is it this dangerous or what are your thoughts about that?
Ebrahim Bagheri:
Right. I think what people are fearful of is the development of autonomous decision-making systems, not necessarily AI. Development of these autonomous decision-making systems will allow technology to act independently, whether it's based on AI or on some other technology. I think the fear comes from control moving away from humans to some other type of being, which does not necessarily share the same value system as humans.
Just as a hypothetical example, think of how we as humans rely on oxygen to live. If you have a robot that runs on fossil fuel or electricity, it doesn't need oxygen. Therefore, it is not in its best interest to follow the same sustainable development goals that we follow in terms of preserving the environment and things like that. It may start optimizing for other objectives, which are not in the best interest of humans. I think it's more about the development of these autonomous decision-making systems, and obviously AI facilitates that process.
But before we reach the stage where we should be fearful of autonomous decision-making systems, I think there are certain preconditions that we need to think about. And one of those preconditions is the need for these AI systems to have some form of self-consciousness. So for us to see whether we're at a point where we could have AI systems with self-consciousness, we should understand how these AI systems actually operate.
I think the public fascination with AI comes from generative AI, things like large language models, or image generation technology, or video generation technology. We call these broadly generative AI technology. And I think the public fascination comes from these algorithms. And if we maybe talk a little bit about how these technologies work, it would help us understand whether there is some consciousness involved with these algorithms, or are we actually at a point where these algorithms may start making autonomous decisions?
So just to give you an example of how these large language models work: essentially, when you interact with a language model like ChatGPT, what happens is that this algorithm, this AI system, is trying to complete the prompt that you've given it. If you start by typing in a certain question, or you write a sentence, what the algorithm, the generative AI system, will try to do is predict the next word that should come after it. And then once it has generated that word, it tries to decide what the next word is, and the next, and so on.
Essentially it's a probabilistic model that says, "I've read all of the sentences in the world on the internet, from books, magazines, Wikipedia, and I've seen under what conditions certain terms come after each other." So it learns this probability distribution over words. And so when you ask a question, it has probably seen that question, or a very similar question, or a similar sequence of terms before, and it knows what types of sentences or words need to come after it.
When we think about these language models, we should know that these are probability distributions over terms, but they're very realistic because the algorithm, the AI system has seen a lot of it. That is why a lot of the responses you see from a language model are very realistic, because they've been trained on a huge amount of data.
What I want to get at is that while there is a sense that these algorithms have consciousness, in fact they're very far from that stage, and there are certain other things that we should be worried about, which we can talk about during our conversation.
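For readers who want to see the "probability distribution over words" idea in code, here is a minimal, hypothetical Python sketch of a next-word predictor built from bigram counts over a toy corpus. Real large language models use neural networks trained on vastly more text, but the principle of completing a prompt by repeatedly sampling a learned next-word distribution is the same.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for "all the sentences on the internet".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which: a crude learned distribution over next words.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def complete(prompt: str, length: int = 6, seed: int = 0) -> str:
    """Complete a prompt by repeatedly sampling a plausible next word."""
    random.seed(seed)
    words = prompt.split()
    for _ in range(length):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(complete("the cat"))  # e.g. "the cat sat on the mat ."
```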
Rosa van den Beemt:
Yeah, that's really helpful. And I think also, given what you were saying about the actual technology behind these large language models, it seems like they're capable of reproducing what already exists, but that might also mean they're capable of reproducing information that's out there that might not be correct. So I was wondering if we could talk a little bit about the current impact, or perceived future impact, of misinformation and how AI contributes or doesn't contribute to it.
Ebrahim Bagheri:
Yeah, that's a great point. I think generative AI actually brought AI technology to the forefront, to the public, but that doesn't mean that AI wasn't used before. AI has been used for many, many years now, specifically in the information sphere. For instance, you go on Amazon, you make purchases. You go on Google, you do searches. You go on certain news outlets, you read news. All of these are somehow powered by some AI decision-making tool behind the scenes. For instance, you go on an e-commerce platform, and it will try to maximize, based on an AI system, the likelihood of your purchase. It would like to make recommendations to you so that you buy a certain product.
AI has been in the information ecosystem for a long time. I think when we think about the dangers of AI, we should think about how AI has already been integrated within the information ecosystem. Think about search engines and the point that you made about misinformation. The way search engines work is that they operate based on relevance. A user searches for a certain term. The search engine will try to find the webpages or content that are most relevant to the user's query.
But over the years, the function of a search engine has changed a bit, so that search engines are also advertising platforms. The search engine wants you to be happy, so it gives you relevant content, but out of all the relevant content it could show you, it shows you the results that you're most likely to click on, and that have the highest advertising value for the search engine.
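As a rough sketch of the trade-off described above, the snippet below ranks candidate results by a blend of relevance, predicted click probability, and advertising value. The fields, weights, and URLs are hypothetical; this is not any real search engine's ranking formula.

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    relevance: float    # how well the page matches the query (0 to 1)
    click_prob: float   # predicted probability the user clicks (0 to 1)
    ad_value: float     # expected advertising revenue if clicked, in dollars

def rank(results: list[Result], ad_weight: float = 0.3) -> list[Result]:
    # Score each relevant result by expected clicks plus a weighted ad-revenue term.
    def score(r: Result) -> float:
        return r.relevance * r.click_prob + ad_weight * r.ad_value
    return sorted(results, key=score, reverse=True)

candidates = [
    Result("https://example.org/neutral-overview", relevance=0.9, click_prob=0.4, ad_value=0.0),
    Result("https://example.com/sponsored-article", relevance=0.8, click_prob=0.6, ad_value=1.2),
]
for r in rank(candidates):
    print(r.url)
# With ad value weighed in, the sponsored page outranks the slightly more relevant one.
```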
Put that in the context of the amount of internet traffic that goes through a search engine. Over 95% of access to the internet starts with a search. Even if I know what I'm looking for, I still search for it and I find the page through the search engine, and then I click on it and go to that page, instead of typing the full URL.
Pretty much most of the experiences we have with the internet go through the search engine. And just simple changes to the order of the links we get from the search engine actually impact our decisions, the actions we take, the products we purchase, maybe the books we read, or maybe even our beliefs.
This is very powerful. So say I don't know anything about COVID and I've just heard about it; it's coming up. The first set of information that I read is what I'm most likely to believe, and that's what the search engine gives me. The issue is that the search engine is not only trying to give me factual information, because it's very hard to determine what's factual and what's not. It's trying to optimize for relevance, and also optimize for advertising revenue.
This opens up the space for misinformation campaigns to appear, because if you have advertising money behind content that's being pushed through these information platforms, such as social networks and search engines, then people can be funding certain information campaigns. And as long as there's ad revenue behind it, the search engine or the social network platform will optimize for it based on AI systems. This could disrupt democracies, create a lack of trust in democratic institutions, and so on. It can also create issues with privacy, because the search engine, or the social network, needs to track your behavior on the network so it can personalize information access.
I think what we need to be thinking about, and this is my personal opinion, is that AI systems have been deployed, at scale, in most if not all of our digital information platforms. They're collecting our personal data. They're personalizing our access to information every single second. And that shapes who we are, because we read online. We access information. We make judgments based on the information that's provided to us, and all of that is shaped by the information that's being customized.
So without us knowing, these information platforms driven by AI systems are actually shaping our beliefs, our judgments, and so on. And I think that's what we should really be focusing on at this point, because AI is already here. It's not something futuristic. It's shaping the fabric of our lives.
Rosa van den Beemt:
I have certainly been guilty myself of saying, "Oh, the algorithm really seems to know me," when it recommends a certain product or service, and making light of it. But it raises a really interesting question: it's very hard not to live online. A lot of our data is collected. There are issues around, like you mentioned, data privacy, consent, and what consent means. And perhaps also interesting, we're recording this at a time when a large part of Hollywood's writers and actors are on strike. There are negotiations about consent for scripts or image likeness to be used to train artificial intelligence, and to create maybe future AI-based art. So maybe we can talk a little bit about ownership, data ownership, intellectual property, and even how it might impact the future of work and employment in certain sectors. Can you speak a little bit about that?
Ebrahim Bagheri:
Oh yeah, absolutely. You touch on a very relevant topic with the Hollywood movement. Recently, two authors filed a lawsuit against OpenAI, claiming that ChatGPT had actually read their copyrighted books and used them, and those books should have been protected by copyright. The way they believe ChatGPT was trained on them is that, they think, the summaries ChatGPT generates for their books are very accurate. And these authors believe that there would've been no other way for ChatGPT to generate such accurate summarizations of their books unless it was trained on the books.
It's a fascinating time from the perspective of data ownership and intellectual property. I think the legal system now has to clarify what counts as derivative work under intellectual property law. It really depends on which jurisdiction you're in, but it comes down to the interpretation of the fair use doctrine.
So the fair use doctrine says that even if material is copyrighted, under some use cases you can still use it without the owner's permission. For instance, if you're doing news reporting, if you're doing in-classroom teaching, if you're doing research, you don't need to go and ask for permission to use the copyrighted material in these limited cases. And some governments actually believe that the fair use doctrine applies to text and data mining. For instance, the UK government supports the idea that you could use copyrighted material to do data mining and text mining and so on. It's really not clear whether technology like ChatGPT, Stable Diffusion, Imagen, and all these different generative AI models should be training on copyrighted material or not. And I think that's something that we will be hearing a lot about.
But one point I want to make is, other than the issue of intellectual property belonging to individuals, there is this more important and, I think, critical issue for us as a human race, and that is creativity. Most generative AI technology replicates, with some degree of freedom, content that it's been trained on. It will read books, it will summarize them. It can write you a new novel or paint a new picture and so on. But these are all inspired by the historical data they've seen.
But what artists and scholars and authors do goes beyond reading other people's work and generating similar content. They bring creativity and innovation, which is their work. And so, if you allow generative AI to take over the landscape of scholarship, for instance, or artistry, what will happen is that artists, and scholars, and other individuals will probably face financial hardship. It will become a less attractive endeavor for people to engage in writing, and painting, and creating art and so on. And so there will be a vacuum within that space of creativity, which will eventually disadvantage humanity overall, because you don't have people going in that direction. You will have algorithms replicating the same things over and over again.
I think there are two angles to this issue of intellectual property. One is that, immediately, there's content out there that belongs to people. What is our position on that? Because that directly impacts those people. Then there's the bigger issue of the impact on humanity as a whole. What happens if you don't protect people's intellectual property, which will impact their livelihood and eventually will impact innovation and creativity overall?
Rosa van den Beemt:
Maybe before we wrap up part one of our two-part series, can we discuss the positive applications of AI on sustainability outcomes? For those of us working on climate issues, we are seeing some positive potential for AI to help address climate change. Not solving climate change, but helping through better measurement of emissions, suggestions of how to better reduce emissions, improving things like hazard projections of climate change effects such as sea level rise or extreme events like droughts, hurricanes, or wildfires, and also the ability to help maybe with large-scale climate modeling and scenario analysis. However, this would take, as with anything, a lot of education, and access to this type of technology. What are your thoughts on the positive applications of AI, particularly maybe on sustainability outcomes?
Ebrahim Bagheri:
Yeah, so as you know, I'm a computer scientist by training, so the development of AI is my bread and butter. I hope my comments so far are not construed as being negative towards AI. I'm a fan of developing AI. I hear your points about AI having a lot of positive impacts. We've already seen, as you mentioned, applications of AI within various areas, and I'm hoping to discuss all of those with you in the second part.
What I want to point out are the methodological considerations of how to develop AI, and where we should be developing AI, and when, and for what purpose. So one way you can think about technology development is, as engineers or computer scientists, we could say, "Technology is the goal. Technology development is the goal. We have the possibility of advancing it, therefore we will do it."
So that is one way to do it. And if you take this approach, it could lead you to very creative technology development streams. It will impact large numbers of people, revolutionize a lot of industries and the future of work, and so on. But that's one way of going about it, and that's pretty much what we've been doing so far. The downside is that now you will see systematic prejudices being exacerbated by AI in various domains.
The other alternative pathway we could take is to say, "The core of our belief system is the value of the society, the value of the environment we live in, the value of humanity, and therefore we care about the societal, environmental, and human challenges that we face. Those are the core things that we care about, and we are here to solve those problems."
Now, AI is also a tool that we could use to solve some of those problems. It's a different perspective, where we put one or the other at the center, and then we decide what we want to develop.
And so if you think about the sustainable development goals that you mentioned, you put the human at the center; the social issues, the environmental challenges, those are the core things we care about. Now you identify the major pain points that you face with those SDGs. Now you think about, "How am I going to solve those problems?"
The way you would solve those, I think, is by participatory design. You involve the stakeholders that are impacted. Every single person matters. You talk to the different communities, you talk to the people who will be involved, the different industries, NGOs, governments, subpopulations. And you identify what is the process you would take. And then in this bigger ecosystem of problem solving, AI could also be one of the tools that you use to make things more efficient, for instance. Or you look at large amounts of data to help you make decisions. And so on.
This way, you make sure that those systematic prejudices that could be created by AI are avoided, because you thoughtfully engage with the problem. There's a lot of discussion now in different communities on how we can maximize the adoption of AI, and I think that's the wrong question to be asking, because there's no inherent value in the adoption of AI itself unless AI is actually being used for some good.
Are we looking at the right problems? Are we engaging many, many different stakeholders, many different people who are impacted? And are we considering their specific circumstances while developing AI? If we adopt this participatory design approach, I think AI has a lot to offer in that process as one component, but not the major component of the process.
Rosa van den Beemt:
So it's really looking at: what are our problems? How can we solve them? And how can AI be a tool as part of that process? Rather than leading with, "What could AI do for us? Let's just see and find out, throw a whole bunch at the wall, and see what sticks."
Ebrahim Baheri:
Exactly.
Rosa van den Beemt:
Well, thank you so much, Ebrahim. Everyone, be sure to join us for our next episode, as we dive deeper into the social implications of artificial intelligence.
Michael Torrance:
Thanks for listening to Sustainability Leaders. This podcast is presented by BMO Financial Group. To access all the resources we discussed in today's episode, and to see our other podcasts, visit us at bmo.com/sustainabilityleaders.
You can listen and subscribe free to our show on Apple Podcasts, or your favorite podcast provider. And we'll greatly appreciate a rating and review and any feedback that you might have.
Our show and resources are produced with support from BMO's Marketing team and Puddle Creative. Until next time, I'm Michael Torrance. Have a great week.
Speaker 5:
For BMO disclosures, please visit BMOCM.com/podcast/disclaimer.
Part Two
Ebrahim Bagheri:
Every time you submit a request to ChatGPT, you're generating about 1.5 grams of CO2 emissions, which is a huge amount of emissions if you think about the number of requests that are submitted to these generative AI models on a daily basis. So we should really be mindful that while generative AI technology creates these new pathways to revolutionizing different industries, although we don't see it, there's a lot of carbon footprint.
Michael Torrance:
Welcome to Sustainability Leaders. I'm Michael Torrance, chief sustainability officer with BMO Financial Group. On this show, we will talk with leading sustainability practitioners from the corporate, investor, academic, and NGO communities to explore how this rapidly evolving field of sustainability is impacting global investment, business practices, and our world.
Speaker 3:
The views expressed here are those of the participants and not those of Bank of Montreal, its affiliates, or subsidiaries.
Rosa van den Beemt:
Hi, everyone. Welcome to part two of our two-part series around artificial intelligence. I am Rosa van den Beemt, director of stewardship in Responsible Investment at BMO Global Asset Management, and I'm really thrilled to be joined again by Ebrahim Bagheri, professor at Toronto Metropolitan University, formerly known as Ryerson, the Canada Research Chair in Social Information Retrieval, and the NSERC Industrial Research Chair in Social Media Analytics.
He is also the director of the NSERC CREATE program on the Responsible Development of AI and the recipient of several awards, including the NSERC Synergy Award for Innovation, which recognizes outstanding industry-academia collaboration. In our last episode, we ended on the positive applications of AI for sustainability outcomes. Ebrahim discussed leading with identifying the problems that need to be solved and then utilizing AI as a potential tool to solve them, rather than starting with, "What could AI possibly do for us?" Ebrahim, welcome back.
Ebrahim Bagheri:
Thank you for having me, Rosa. It's a pleasure.
Rosa van den Beemt:
I was hoping we could start today's episode by diving further into the positive applications of AI for sustainability outcomes and discussing some of the use cases.
Ebrahim Bagheri:
For sure. Given we're talking about sustainability, I want to talk about AI and sustainability both from the angle of how AI is being trained and from how AI can positively impact sustainability goals. So maybe I'll start by talking about how AI is trained and how that can create some problems from a sustainability perspective, and then also talk about some of its positive use cases. So AI systems are trained on large clusters of supercomputers, and those supercomputers are often in large data centers. They all run on electricity, and the electricity used to run the data centers has a sizable carbon footprint. By carbon footprint, I mean CO2 emissions or equivalent.
When we think about data centers and large supercomputers, we might not actually know how much electricity they consume, how much energy they consume. Essentially, if you think about the high-tech industry as we know it, about 15% of its energy consumption is on AI applications. Now, for us to get a sense of what that means in terms of emissions, let's talk about generative AI models. GPT-3, which was the predecessor to ChatGPT, generated 552 tons of CO2 emissions just in training that one model.
BLOOM, which is another open-source large language model, developed by BigScience, consumed 914 kWh, that's kilowatt-hours, of electricity and emitted about 360 kilograms of CO2 over an 18-day period when it was handling about 230,000 requests. So if you do the math, this means every time you submit a request to ChatGPT, you're generating about 1.5 grams of CO2 emissions, which is a huge amount of emissions if you think about the number of requests that are submitted to these generative AI models on a daily basis. So we should really be mindful that while generative AI technology creates these new pathways to revolutionizing different industries, although we don't see it, there's a lot of carbon footprint coming from training, running, developing, and deploying generative AI models.
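As a back-of-the-envelope check of the per-request figure quoted above, dividing the roughly 360 kilograms of CO2 by the roughly 230,000 requests handled over that 18-day window gives about 1.5 grams per request:

```python
# Figures quoted above for BLOOM's 18-day inference window.
total_emissions_g = 360 * 1_000   # ~360 kg of CO2, expressed in grams
requests = 230_000                # ~230,000 requests handled

print(f"{total_emissions_g / requests:.2f} g CO2 per request")  # ~1.57 g, close to the 1.5 g cited
```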
Now, on a more positive note, I think AI in general has also shown a lot of impact on various industries for good. For instance, one of my favorite areas is precision agriculture, which is primarily driven by AI. There are reports by the UN saying that you could reduce expenditure on agriculture by about 30% by using automated robots and drones, which allow you to harvest crops with higher precision. There was a very interesting report I came across which said there's about $43 billion of lost crops every year due to weeds, but there are now AI-driven machine vision algorithms that automatically identify invasive weeds and tell farmers how to apply weed control chemicals to optimize the growth of the crops. So precision agriculture, I think, is an area which really is benefiting, and has the potential to benefit, from AI a lot.
You mentioned climate change. While AI cannot solve the climate issue, I think there are areas where AI is making a lot of impact. For instance, you have satellites orbiting the Earth in space. They're taking photos, and those can all be processed by image processing and computer vision algorithms to identify issues. For instance, you can monitor forest fires with computer vision. You can identify sources of carbon dioxide by processing these images.
Now, the areas of positive impact of AI, I think, are limitless. There are abundant areas where you can think about it both at a macro level, thinking about problems we have as a society or as countries or populations, and also as individuals; think about poverty, fair distribution of resources, education, healthcare, and so on. What we need to be mindful of is that while these applications of AI for good are very exciting, we should really consider that when you develop AI, it's not one population that you impact. There is a vast range of different people that AI could impact.
So think about optimizing healthcare procedures, which is very important and something we should do. But if you focus on developing AI for a certain population, then you may actually be developing algorithms that disadvantage other populations. So we should always be thinking about the trade-off between the positive impacts AI can make and the other impacts that can have unintended consequences.
Historically, what we've seen is that underprivileged populations are the ones being disadvantaged, because most of the investment is happening within historically privileged communities. They have the resources to advance AI, and therefore, the problems that are defined are problems for more privileged populations. The solutions that are created are based on the data collected from these more privileged communities, and therefore, the AI algorithms and systems that are developed may not necessarily be transferable to other populations.
So I go back, for instance, to precision agriculture. In a lot of industrialized countries, farming has become industrialized, so you have large farms. Therefore, you can apply drones. You can use robots within those farms. But when you go to the global south, farming is not as industrialized. So the applications of AI that we think will revolutionize agriculture may not even be transferable to these other countries. So we should actually be thinking about, how can we use AI for good and use AI for everyone's benefit?
Rosa van den Beemt:
That's interesting, because I know one of the things that you're passionate about as well is the accessibility benefits that AI could bring, which is, in one way, a positive application of AI to maybe underserved communities. Maybe we can talk about that first, and then I do want to come back to that division between the global north and global south.
Ebrahim Bagheri:
Yeah. Absolutely. I think you touch on an important point: access and privilege. Technology has traditionally impacted access and privilege, and it's a two-way street. Privilege gives you access to technology, and access to technology will give you privilege. So when we think about AI, given the scope of its impact, it's actually changing the dynamics of access and privilege to a great extent. When we read reports about AI and its impact on the economy, for instance, we see that the market size is expected to grow to $660 billion by 2030, but the distribution of the wealth is not going to be as lucrative for everyone. The $660 billion will probably go to a small fraction of companies who lead. And so I think we should think about wealth generation and also the distribution of the wealth.
The wealth is generated by creating and deploying AI technology, and the creation of AI technology is not just an intellectual process. It's also about creating reliable, clean, annotated data. If you don't have the clean data, regardless of how innovative your algorithm is, you would not have the AI system. The process of creating reliable, clean, annotated data is very expensive and very time-consuming. So what is the solution right now? The solution is: let's find jurisdictions which have lax labor regulation and low wages, and use those jurisdictions for data annotation. Right?
So what happens is data annotation workers are often from poorer nations which don't have access to technology development. And why is this? Think about the lack of oversight of the workforce. There are no minimum wages. So there's this rhetoric that says, "Okay, the reason this labor is done in those countries is that at least we are creating opportunities for people to work in underdeveloped countries." But think about the reality. The effect of this global distribution of work is positively impacting a certain population and negatively impacting another subpopulation: the wealth is distributed one way, and the heavy lifting of the work, which is often quite hard to do, falls on a different subpopulation.
So we should be mindful of this. When we think about the generation of wealth, we should also be thinking about this concept of access and privilege. Certain populations are privileged because they have access to the technology, and another subset of the population doesn't have access to the technology. They're creating technology, for the benefit of another group, that they themselves cannot use.
Rosa van den Beemt:
It seems like there are a lot of parallels with existing inequalities in global wealth creation in general, or even things like climate change, which negatively impacts already vulnerable communities even though wealthier nations have traditionally contributed much more to global emissions, or other types of supply chains where workers in developing countries might make clothing for consumers in the global north that they themselves would not be able to buy. That's really interesting and something that I think hasn't been covered a lot, or at least I had not read that much about it. If we can switch a little bit to thinking about developing AI in a manner that is responsible, what are the key elements to bring into the design and application of artificial intelligence to make sure that there are safeguards in place to guard against unintended consequences?
Ebrahim Bagheri:
I think that's a billion-dollar question, Rosa, and I wish I had the answer to that. But I have some thoughts on what it means to do responsible development of AI. My first thought is what I mentioned in the previous episode, and that is identifying what our core values are and what the problems are that we want to solve. Do we value technology development inherently, or do we value technology development because it can solve some of our core problems? So it's a matter of setting priorities. If the priority is to do technology development for the sake of humanity, then we should be focusing on what problem we have, how we can best tackle it, and whether AI should have a part, and, in that process, think about participatory design, where we engage, through a democratic process, with every single individual who will be impacted.
So I think that's the process for responsible development. But if you ask me about the steps, I think the most important step that we need to take is education, and I say that a little bit from my role as an educator at the university. I think education is very important. So misinformation has been exacerbated because of AI. So there are algorithms that tell you what type of information you write that people will read most, and believe most, and send to their friends most. But in contrast, there are a lot of people who say, "Okay. How can we develop AI technology that prevents misinformation?" It's the question of, could we develop technology that stops technology? If you think that way, I think you will never actually find a solution, because as soon as you find an algorithm that stops the other algorithm, the other group will develop another algorithm.
So it's a never-ending race, and as Scandinavian countries like Finland have shown, addressing misinformation should be done through a process that creates a resilient society, not through technology alone. So their solution has been, "Let's go to our schools and educate our students, helping them be resilient against misinformation." Right? So if every single person in the society is educated to make critical judgments, then you're safe against misinformation, and I think the same analogy applies to the responsible development of AI. Right? So if you want to create AI that's responsible, you should educate your general population about AI: what is AI, where should it be used, what is its potential, how does it impact rights and privacy? And then, on the other hand, also educate our engineers, computer scientists, data scientists, and AI developers about their legal and ethical responsibilities, and let them know that even if they're working for a high-tech company, and the company is obviously responsible, they themselves have a legal and moral responsibility to understand the impact of the work that they're doing.
So you educate your AI engineers to say, "Hey. Look, you have the legal and moral responsibility to uphold values that are important." Although your supervisor is saying, "Let's collect all this personal data and build this next great accurate algorithm," at some point, you should say, "Doesn't this violate people's privacy? Maybe we should not do it." So I think education is key, both at the level of the people that develop technology and also people who use it, the general population.
Obviously, I don't want to downplay the role that regulation and enforcement play. That's really key. But without the education component, even if you have the best legal framework and the best regulation, if people are not educated about it, it's very hard to enforce. So it's all about public awareness, understanding the impacts of AI, and also making sure people who develop AI know that they're legally and morally responsible.
Rosa van den Beemt:
That is a wonderful way to end this conversation.
Ebrahim Bagheri:
Thank you, Rosa, for the opportunity. It's lovely to talk to you.
Rosa van den Beemt:
Likewise, Ebrahim. Thank you so much. It's been a joy.
Ebrahim Bagheri:
Thank you.
Michael Torrance:
Thanks for listening to Sustainability Leaders. This podcast is presented by BMO Financial Group. To access all the resources we discussed in today's episode and to see our other podcasts, visit us at bmo.com/sustainabilityleaders. You can listen and subscribe free to our show on Apple Podcasts or your favorite podcast provider, and we'll greatly appreciate a rating and review and any feedback that you might have. Our show and resources are produced with support from BMO's marketing team and Puddle Creative. Until next time, I'm Michael Torrance. Have a great week.
Speaker 5:
For BMO disclosures, please visit bmocm.com/podcast/disclaimer.
Understanding the Inner Workings of AI
Vice-présidente et analyste, investissement responsable
Mme van den Beemt est une professionnelle de la mobilisation d’entreprises qui compte plus de six ans d’expérience dans le secteur de l&rsqu…
Mme van den Beemt est une professionnelle de la mobilisation d’entreprises qui compte plus de six ans d’expérience dans le secteur de l&rsqu…
VOIR LE PROFIL COMPLET- Temps de lecture
- Écouter Arrêter
- Agrandir | Réduire le texte
Disponible en anglais seulement.
“The excitement around artificial intelligence comes from the fact that the machines now don't need humans' explicit instructions, but rather they look at past historical patterns based on historical data, and learn how to do those things themselves.”—Ebrahim Bagheri, professor at Toronto Metropolitan University.
Rosa van den Beemt, Director of Stewardship in Responsible Investment at BMO Global Asset Management sat down with Ebrahim Bagheri to discuss artificial intelligence today, from positive use cases, to ethics, to risks.
Listen to our ~25-minute Part 1 episode
Listen to our ~18-minute Part 2 episode
Sustainability Leaders podcast is live on all major channels including Apple and Spotify.
Part One
Ebrahim Baheri:
The excitement around artificial intelligence comes from the fact that the machines now don't need humans' explicit instructions, but rather they look at past historical patterns based on historical data, and learn how to do those things themselves.
Michael Torrance:
Welcome to Sustainability Leaders. I'm Michael Torrance, Chief Sustainability Officer with BMO Financial Group. On this show, we will talk with leading sustainability practitioners from the corporate, investor, academic, and NGO communities to explore how this rapidly evolving field of sustainability is impacting global investment, business practices, and our world.
Speaker 3:
The views expressed here are those of the participants, and not those of Bank of Montreal, its affiliates, or subsidiaries.
Rosa van den Beemt:
Hi, everyone. I am Rosa van den Beemt, Director of Stewardship in Responsible Investment at BMO Global Asset Management. Today we're joined by Ebrahim Bagheri, professor at Toronto Metropolitan University, formerly known as Ryerson. He is the Canadian Research Chair for Social Information Retrieval, and the NSERC Industrial Research Chair in Social Media Analytics. He's also the director of the NSERC CREATE program on the Responsible Development of AI, and the recipient of several awards, including the NSERC Synergy Award for Innovation in Outstanding Industry and Academia Collaboration.
We will be discussing artificial intelligence today, from positive use cases, to ethics, to risks. This two part series will help listeners understand the inner workings of AI today. Ebrahim, thanks so much for being here.
Ebrahim Baheri:
Thank you very much, Rosa, for having me. It's a pleasure.
Rosa van den Beemt:
Maybe we can start with some basics, because artificial intelligence, as I know from you, has been around for quite a long time already. But it seems like in the last few years people have come to equate it a little bit more with natural language processing tools like chat GPT. Can we start by briefly discussing what is AI? And what would be the difference, for example, between an algorithm and artificial intelligence?
Ebrahim Baheri:
Yeah, that's a great question. Algorithms have been around for many, many decades, if not centuries. And essentially what algorithms are, are structured workflows of how we accomplish things. So you can think of algorithms as workflows. But within computers, what algorithms are, are a set of instructions that the computer can follow efficiently.
If you write a program, you are essentially coding a set of instructions for the computer to do, and we call those algorithms. For more complex tasks, we need more efficient algorithms. People within algorithm design and computer science have been working on designing more efficient algorithms for certain problems.
Now, you can think of AI, also, as a set of algorithms, but what we are talking about AI right now is essentially statistical machine learning. And the expectation of statistical machine learning is for the machine to learn to create algorithms on its own. So essentially you want the machine to learn to do things without you explicitly instructing the machine to do those things. The excitement around artificial intelligence comes from the fact that the machines now don't need humans explicit instructions, but rather they look at past historical patterns, based on historical data, and learn how to do those things themselves. So if you have to distinguish between an AI and an algorithm is that AI will help you as humans to learn the algorithms based on observations from data.
Rosa van den Beemt:
That's really interesting and I think good to know in this day and age especially, what the difference is. I think it's been in the news a lot. Artificial intelligence seemingly has the potential to create a lot of efficiencies, like you said, help us solve problems, but it has also been likened to a potential threat to humanity. We've heard people say, "Oh, it might have potential impact similar to global pandemics or nuclear war." So what exactly is the impact and what are we dealing with? Is it this dangerous or what are your thoughts about that?
Ebrahim Baheri:
Right. I think what people are fearful of is the development of autonomous, decision-making systems, not necessarily AI. Because development of these autonomous decision-making systems will allow technology to act independently, whether it's based on AI or based on some other technology. I think the fear comes from the control moving away from humans to some other type of being, which does not necessarily share the same value system as humans.
Just as a hypothetical example, think we as humans rely on oxygen for livelihood. If you have a robot that works on fossil fuel or electricity, they don't need oxygen. Therefore, it is not to their best interest to follow the same sustainable development goals that we follow in terms of preserving the environment and things like that. They may start optimizing for other objectives, which is not to the best interest of the humans. I think it's more about the development of these autonomous decision-making systems, and obviously AI facilitates that process.
But I think before we've reached the stage where we're fearful for autonomous decision-making systems, I think there's certain preconditions that we need to think about. And one of those preconditions is the need for these AI systems to have some form of self-consciousness. So for us to see whether we're at a point where we could have AI systems that have self-consciousness, we should understand how these AI systems actually operate.
I think the public fascination with AI comes from generative AI, things like large language models, or image generation technology, or video generation technology. We call these broadly generative AI technology. And I think the public fascination comes from these algorithms. And if we maybe talk a little bit about how these technologies work, it would help us understand whether there is some consciousness involved with these algorithms, or are we actually at a point where these algorithms may start making autonomous decisions?
So just to give you an example of how these large language models work, essentially when you interact with a language model like Chat GPT, what essentially happens is that this algorithm, this AI system, is trying to complete the prompt that you've given the algorithm. If you start by typing in a certain question, or you write a sentence, what the algorithm, the generative AI system will try to do is it will try to predict the next word that should come after it. And then once it's generated the word, try to decide what's the next word and what's the next word.
Essentially it's a probabilistic model that says, "I've read all of the sentences in the world on the internet, off books, magazines, Wikipedia, and I've seen under what conditions do certain terms come after each other." So it learns this probability distribution over words. And so when you ask a question, it's probably seen that question, or a very similar question, or a similar type of sequence of terms before, and it knows what types of sentences or words need to come after it.
When we think about these language models, we should know that these are probability distributions over terms, but they're very realistic because the algorithm, the AI system has seen a lot of it. That is why a lot of the responses you see from a language model are very realistic, because they've been trained on a huge amount of data.
What I want to get at is while there is a sense that these algorithms have consciousness, but in fact they're very far from that stage, and there are certain other things that we should be worried about, and we can talk about those during our conversation.
Rosa van den Beemt:
Yeah, that's really helpful. And I think also what you were saying about the actual technology behind these large language models, it seems like they're capable of reproducing what already exists, but that might also mean that they're capable of reproducing information that might not be correct, that is out there. So I was wondering if we could talk a little bit about the impact currently, or perceived future impact. Of misinformation and how AI contributes or doesn't contribute to misinformation.
Ebrahim Baheri:
Yeah, that's a great point. I think generative AI actually brought AI technology to the forefront, to the public, but that doesn't mean that AI wasn't used before. AI has been used for many, many years now, specifically in the information sphere. For instance, you go on Amazon, you do purchases. You go on Google, you do searches. You go on certain news outlets, you read news. All of these are somehow powered by some AI decision-making tool behind the scenes. For instance, you go on an e-commerce platform, they will try to maximize, based on an AI systems, the likelihood of your purchase. They would like to make recommendations to you so that you buy certain product.
AI has been in the information ecosystem for a long time. I think when you think about dangers of AI, we should think about how AI has already been integrated within the information ecosystem. Think about search engines and the point that you made about misinformation. Search engines, the way they work is we call them, they operate based on relevance. A user searches for a certain term. The search engine will try to find the webpages or content that are most relevant to the user's query.
But over the years, the function of a search engine has changed a bit, so that search engines are also advertising platforms. The search engine wants you to be happy, so they give you relevant content, but from out of all the relevant content they can show you, they show you the ones that you're most likely to click on, and has the highest advertising value for the search engine.
Put that in the context of the amount of internet traffic that goes through a search engine. Over 95% of access to the internet starts with a search. Even if I know what I'm looking for, I still search for it and I find the page through the search engine, and then I click on it and go to that page, instead of typing the full URL.
Pretty much most of the experiences we have with the internet goes through the search engine. And think about just simple changes of the orders, of the links we get from the search engine actually impacts our decisions, the actions we take, the products we purchase, maybe the books we read, or maybe even our beliefs.
This is very powerful. So if I don't know anything about Covid and I just heard about Covid, it's coming up. The first set of information that I read are the things that I most likely believe in, and that's what search engine gives me. The issue is that the search engine is not only trying to give me the factual information, because it's very hard to determine what's factual and what's not. It's trying to optimize for relevance, and also optimize for advertising revenue.
This opens up this space for misinformation campaigns to appear, because if you have advertising money behind content that's being pushed through these information platforms such as social networks and search engines, then people can be funding certain information campaigns. And as long as there's ad revenue going behind it, the search engine or the social network platform will optimize based on AI systems. This could disrupt democracies, create lack of trust in democratic institutions and so on. It can also create these issues with privacy, because search engine will need to track, or social networks track your behavior on networks so they can personalize information access.
I think what we need to be thinking about, and this is my personal opinion, that AI systems have been deployed, at scale, in most if not all of our information platforms, digital information platforms. They're collecting our personal data. They're personalizing our access to information on every single second. And that shapes who we are, because we read online. We access information. We make judgments based on information that's provided to us, and all of that is shaped by the information that's being customized.
So without us knowing these information platforms driven by AI systems are actually shaping our beliefs, our judgments, and so on. And I think that's where we should really be focusing on at this point, because AI is already here. It's not something futuristic. It's shaping the fabric of our lives.
Rosa van den Beemt:
I have certainly been guilty myself of saying, "Oh, the algorithm really seems to know me," when it recommends a certain product or service and making light of it. But it is a really interesting question of it's very hard not to live online. A lot of our data is collected. There's issues around, like you mentioned, maybe data privacy consent and what consent means. And perhaps also interesting, we're recording this during a time when a large part of Hollywood writers and actors are on strike. There are negotiations about consent for scripts or image likeness to be used to train artificial intelligence, and to create maybe future AI-based art. So maybe we can talk a little bit about ownership, data ownership, intellectual property, even how it might impact the future of work and employment in certain sectors. Can you speak a little bit about that?
Ebrahim Baheri:
Oh yeah, absolutely. You touch on a very relevant topic with the Hollywood movement. There was recently two authors who filed a lawsuit against the Open AI. Claiming that Chat GPT had actually read their copyrighted books and used it, and those books should have been protected by copyright. The way they believe Chat GPT was trained on them was that, they think, the summaries that Chat GPT generates for their books is very accurate. And these authors believe that there would've been no other way for chat GPT to be able to generate such accurate summarizations of their books, unless it was trained on the book.
It's a fascinating time from the perspective of data ownership, intellectual property. I think the legal system now has to clarify what is derivative work under intellectual property law? And it really depends on which jurisdiction you're in, but it's the interpretation of what is fair use doctrine.
So fair use doctrine says if you have copyrighted material, still under some use cases, you can use the material without the owner's permission. So for instance, if you're doing news reporting, if you're doing in-classroom teaching, if you're doing research, you don't need to go and ask for permission to use the copyrighted material in these limited cases. And some governments actually believe that the first use doctrine actually applies to text and data mining. For instance, the UK government actually supports this idea that you could use copyrighted material to do data mining and text mining and so on. It's really not clear whether technology like Chat GPT, and Stable Diffusion and Imagine and all these different generative AI models should be training on copyrighted material or not. And I think that's something that we will be hearing a lot from.
But one point I want to make is, other than the issue of the intellectual property belonging to individuals, there is this more important and I think critical issue for us as human race, and that is creativity. Most of the generative AI technology replicates, with some degree of freedom, content that it's been trained on. It will read books, it will summarize them. It can write for you a new novel or paint a new picture and so on. But these are all inspired based on historical data that they've seen.
But what artists and scholars and authors do is beyond reading other people's work and generating similar content. They have this concept of creativity, and innovation, which is their work. And so, what will happen is if you allow generative AI to take over the landscape of scholarship, for instance, or artistry, what will happen is that artists, and scholars, and other individuals will probably face financial hardship. It will become a less attractive endeavor for people to engage in writing, and painting, and creating art and so on. And so there will be a vacuum within that space of creativity, which will eventually disadvantage the humanity overall, because you don't have people going in that direction. You will have algorithms replicating the same things over and over again.
So I think there are two angles to this issue of intellectual property. One is immediate: there is content out there that belongs to people, and what is our position on that? Because that directly impacts those people. The other is the bigger issue of the impact on humanity as a whole. What happens if you don't protect people's intellectual property, which will affect their livelihood and eventually affect innovation and creativity overall?
Rosa van den Beemt:
Maybe before we wrap up part one of our two-part series, can we discuss the positive applications of AI for sustainability outcomes? For those of us working on climate issues, we are seeing some positive potential for AI to help address climate change. Not solving climate change, but helping through better measurement of emissions, suggestions for how to reduce emissions, improved hazard projections of climate change effects such as sea level rise or extreme events like droughts, hurricanes, or wildfires, and perhaps the ability to help with large-scale climate modeling and scenario analysis. However, as with anything, this would take a lot of education and access to this type of technology. What are your thoughts on the positive applications of AI, particularly on sustainability outcomes?
Ebrahim Bagheri:
Yeah, so as you know, I'm a computer scientist by training, so the development of AI is my bread and butter. I hope my comments so far are not construed as being negative towards AI; I'm a fan of developing AI. I hear your points about AI having a lot of positive impacts. We've already seen, as you mentioned, applications of AI within various areas, and I'm hoping to discuss all of those with you in the second part.
What I want to point out are the methodological considerations of how to develop AI, where we should be developing it, when, and for what purpose. One way you can think about technology development is to say, as engineers or as computer scientists, "Technology is the goal. Technology development is the goal. We have the possibility of advancing it, therefore we will do it."
So that is one way to do it. If you take this approach, it could lead you to very creative technology development streams. It will impact large numbers of people, revolutionize a lot of industries and the future of work, and so on. That's one way of going about it, and that's pretty much what we've been doing so far. The downside is that you will see systematic prejudices being exacerbated by AI in various domains.
The alternative pathway we could take is to say, "The core of our belief system is the value of society, the value of the environment we live in, and the value of humanity, and therefore we care about the societal, environmental, and human challenges that we face. Those are the core things we care about, and we are here to solve those problems."
Now, AI is also a tool that we could use to solve some of those problems. It's a different perspective, where we put one or the other at the center, and then we decide what we want to develop.
And so if you think about the sustainable development goals you mentioned, you put the human at the center; the social issues and the environmental challenges are the core things you care about. Then you identify the major pain points you face with those SDGs, and you think about, "How am I going to solve those problems?"
The way you would solve those, I think, is through participatory design. You involve the stakeholders who are impacted; every single person matters. You talk to the different communities, the people who will be involved, the different industries, NGOs, governments, and subpopulations, and you identify the process you would take. Then, in this bigger ecosystem of problem solving, AI could also be one of the tools you use, to make things more efficient, for instance, or to look at large amounts of data to help you make decisions, and so on.
This way, you make sure that those systematic prejudices that could be created by AI are avoided, because you thoughtfully engage with the problem. There's a lot of discussion now across different communities on how we can maximize the adoption of AI, and I think that's the wrong question to be asking, because there's no inherent value in the adoption of AI itself unless AI is actually being used for some good.
Are we looking at the right problems? Are we engaging the many different stakeholders and people who are impacted? And are we considering their specific circumstances while developing AI? If we adopt this participatory design approach, I think AI has a lot to offer as one component of that process, but not as the central component.
Rosa van den Beemt:
So it's really looking at what are our problems? How can we solve for them? And how can AI be a tool as part of that process? Rather than leading with what could AI do for us? Let's just see and find out and throw a whole bunch at the wall and see what sticks.
Ebrahim Bagheri:
Exactly.
Rosa van den Beemt:
Well, thank you so much, Ebrahim. Everyone, be sure to join us for our next episode, as we dive deeper into the social implications of artificial intelligence.
Michael Torrance:
Thanks for listening to Sustainability Leaders. This podcast is presented by BMO Financial Group. To access all the resources we discussed in today's episode, and to see our other podcasts, visit us at bmo.com/sustainabilityleaders.
You can listen and subscribe free to our show on Apple Podcasts, or your favorite podcast provider. And we'll greatly appreciate a rating and review and any feedback that you might have.
Our show and resources are produced with support from BMO's Marketing team and Puddle Creative. Until next time, I'm Michael Torrance. Have a great week.
Speaker 5:
For BMO disclosures, please visit BMOCM.com/podcast/disclaimer.
Part Two
Ebrahim Bagheri:
Every time you submit a request to ChatGPT, you're generating about 1.5 grams of CO2 emissions, which is a huge amount of emissions if you think about the number of requests that are submitted to these generative AI models on a daily basis. So we should really be mindful that while generative AI technology creates these new pathways to revolutionizing different industries, although we don't see it, there's a lot of carbon footprint.
Michael Torrance:
Welcome to Sustainability Leaders. I'm Michael Torrance, chief sustainability officer with BMO Financial Group. On this show, we will talk with leading sustainability practitioners from the corporate, investor, academic, and NGO communities to explore how this rapidly evolving field of sustainability is impacting global investment, business practices, and our world.
Speaker 3:
The views expressed here are those of the participants and not those of Bank of Montreal, its affiliates, or subsidiaries.
Rosa van den Beemt:
Hi, everyone. Welcome to part two of our two-part series around artificial intelligence. I am Rosa van den Beemt, director of stewardship in Responsible Investment at BMO Global Asset Management, and I'm really thrilled to be joined again by Ebrahim Bagheri, professor at Toronto Metropolitan University, formerly known as Ryerson, the Canada Research Chair for Social Information Retrieval, and the NSERC Industrial Research Chair in Social Media Analytics.
He is also the director of the NSERC CREATE program on the Responsible Development of AI and the recipient of several awards, including the NSERC Synergy Award for Innovation in Outstanding Industry and Academia Collaboration. In our last episode, we ended on the positive applications of AI for sustainability outcomes. Ebrahim discussed leading with identifying the problems that need to be solved and then utilizing AI as a potential tool to solve them, rather than starting with, "What could AI possibly do for us?" Ebrahim, welcome back.
Ebrahim Bagheri:
Thank you for having me, Rosa. It's a pleasure.
Rosa van den Beemt:
I was hoping we could start today's episode by diving further into the positive applications of AI for sustainability outcomes and discussing some of the use cases.
Ebrahim Bagheri:
For sure. Given we're talking about sustainability, I want to talk about AI's impact on sustainability both in terms of how AI is trained and in terms of how AI can positively contribute to sustainability goals. So maybe I'll start with how AI is trained and how that can create some problems from a sustainability perspective, and then talk about some of its positive use cases. AI systems are trained on large clusters of supercomputers, often housed in large data centers, and they all run on electricity. The electricity used to run those data centers has a sizable carbon footprint, and by carbon footprint I mean CO2 emissions or equivalent.
When we think about data centers and large supercomputers, we might not actually know how much electricity and energy they consume. Essentially, if you look at the high-tech industry, about 15% of its energy consumption goes to AI applications. Now, to get a sense of what that means in terms of emissions, let's talk about generative AI models. GPT-3, which was the predecessor to ChatGPT, generated 552 tons of CO2 emissions just from training that one model.
BLOOM, which is another open-source large language model developed by BigScience, consumed about 914 kilowatt-hours of electricity and emitted about 360 kilograms of CO2 over an 18-day period, during which it handled about 230,000 requests. So if you do the math, this means every time you submit a request to ChatGPT, you're generating roughly 1.5 grams of CO2 emissions, which is a huge amount if you think about the number of requests that are submitted to these generative AI models on a daily basis. So we should really be mindful that while generative AI technology creates these new pathways to revolutionizing different industries, although we don't see it, there is a lot of carbon footprint coming from training, running, developing, and deploying generative AI models.
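As a back-of-the-envelope check on the figures quoted above (roughly 914 kWh, 360 kg of CO2, and 230,000 requests over 18 days), the per-request estimate is simply emissions divided by request count. The short sketch below is illustrative only; the variable names and the assumption that emissions scale linearly with the number of requests are ours, not the speaker's.

```python
# Back-of-envelope estimate of per-request CO2 emissions for an LLM deployment,
# using the figures quoted in the episode (illustrative assumption: emissions
# scale linearly with the number of requests served).

total_energy_kwh = 914          # electricity consumed over the 18-day period
total_co2_kg = 360              # CO2 emitted over the same period
total_requests = 230_000        # requests handled in that period

co2_grams_per_request = total_co2_kg * 1000 / total_requests
energy_wh_per_request = total_energy_kwh * 1000 / total_requests

print(f"~{co2_grams_per_request:.2f} g CO2 per request")   # ~1.57 g
print(f"~{energy_wh_per_request:.2f} Wh per request")      # ~3.97 Wh
```

Under these assumptions, roughly 1.6 grams per request means a service handling a million requests a day would emit on the order of 1.6 tonnes of CO2 per day.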
Now, on a more positive note, I think AI in general has also shown a lot of positive impact on various industries. For instance, one of my favorite areas is precision agriculture, which is primarily driven by AI. There are reports by the UN saying that you could reduce expenditure on agriculture by about 30% by using automated robots and drones, which allow you to harvest crops with higher precision. There was a very interesting report I came across which said there are about $43 billion of lost crops every year due to weeds, but there are now AI-driven machine vision algorithms that automatically identify invasive weeds and tell farmers how to apply weed control chemicals to optimize the growth of the crops. So precision agriculture, I think, is an area that really is benefiting, and has the potential to benefit, from AI a lot.
You mentioned climate change. While AI cannot solve the climate issue, I think there are areas where AI is making a lot of impact. For instance, you have satellites orbiting the Earth, taking photos, and those images can all be processed by image processing and computer vision algorithms to identify issues. You can monitor forest fires with computer vision, and you can identify sources of carbon dioxide by processing these images.
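To make the satellite-imagery idea a bit more concrete, here is a deliberately simplified sketch of the kind of processing involved: flagging unusually hot pixels in a thermal band as potential fire activity. Real monitoring systems use trained computer vision models and multiple spectral bands; the threshold, array shape, and band values below are hypothetical.

```python
import numpy as np

# Hypothetical thermal-band readings (brightness temperature in Kelvin) for a
# small satellite image tile; real tiles are much larger and multi-band.
thermal_band = np.array([
    [295.0, 296.5, 297.2, 295.8],
    [296.1, 340.4, 352.9, 298.3],   # hot pixels that may indicate active fire
    [295.7, 298.0, 299.1, 296.4],
])

FIRE_THRESHOLD_K = 330.0  # illustrative cutoff, not an operational value

fire_mask = thermal_band > FIRE_THRESHOLD_K
hot_pixels = np.argwhere(fire_mask)

print(f"{fire_mask.sum()} potential fire pixels at rows/cols: {hot_pixels.tolist()}")
```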
Now, the areas of positive impact of AI, I think, are limitless. There are abundant areas to think about, both at a macro level, problems we have as a society, as countries, or as populations, and at the level of individuals: think about poverty, fair distribution of resources, education, healthcare, and so on. What we need to be mindful of is that, while these applications of AI for good are very exciting, we should really consider that when you develop AI, it's not just one population that you impact. There is a vast range of different people that AI could impact.
So think about optimizing healthcare procedures, which is very important and something we should do. But if you focus on developing AI for a certain population, you may actually be developing algorithms that disadvantage other populations. So we should always be thinking about the trade-off between the positive impacts AI can make and the unintended consequences it can have.
Historically, what we've seen is that underprivileged populations are the ones being disadvantaged, because most of the investment is happening within historically privileged communities. They have the resources to advance AI, and therefore the problems that get defined are problems for more privileged populations. The solutions that are created are based on data collected from these more privileged communities, and therefore the AI algorithms and systems that are developed may not necessarily be transferable to other populations.
So I go back, for instance, to precision agriculture. In a lot of industrialized countries, farming has become industrialized, so you have large farms and you can apply drones and use robots within those farms. But when you go to the global south, farming is not as industrialized, so the applications of AI that we think will revolutionize agriculture may not even be transferable to those countries. So we should actually be thinking about how we can use AI for good and for everyone's benefit.
Rosa van den Beemt:
That's interesting, because I know one of the things that you're passionate about as well is the accessibility benefits that AI could bring, which is, in one way, a positive application of AI to maybe underserved communities. Maybe we can talk about that first, and then I do want to come back to that division between the global north and global south.
Ebrahim Bagheri:
Yeah, absolutely. I think you touch on an important point: access and privilege. Technology has traditionally affected access and privilege, and it's a two-way street. Privilege gives you access to technology, and access to technology gives you privilege. So when we think about AI, given the scope of its impact, it's actually changing the dynamics of access and privilege to a great extent. When we read reports about AI and its impact on the economy, for instance, they say the market size is expected to grow to $660 billion by 2030, but the distribution of that wealth is not going to be as broad. The $660 billion will probably go to a small fraction of leading companies. So I think we should think about wealth generation and also the distribution of that wealth.
The wealth is generated by creating and deploying AI technology, and the creation of AI technology is not just an intellectual process. It's also about creating reliable, clean, annotated data. If you don't have the clean data, regardless of how innovative your algorithm is, you will not have the AI system. The process of creating reliable, clean, annotated data is very expensive and very time-consuming. So what is the solution right now? The solution is to find jurisdictions with lax labor regulation and low wages and use those jurisdictions for data annotation. Right?
So what happens is that data annotation workers are often from poorer nations that don't have access to technology development. And why is this? Think about the lack of oversight on the workforce; there are no minimum wages. So there's this rhetoric that says, "The reason this labor is done in those countries is that at least we are creating opportunities for people to work in underdeveloped countries." But think about the reality. The effect of this global distribution of work is that it positively impacts one population and negatively impacts another: the wealth flows one way, while the heavy lifting of the work, which is often quite hard to do, falls on a different subpopulation.
So we should be mindful of this. When we think about the generation of wealth, we should also be thinking about this concept of access and privilege. Certain populations are privileged because they have access to the technology, while another subset of the population doesn't have access to it. They're creating, for the benefit of another group, technology that they themselves cannot use.
Rosa van den Beemt:
It seems like there are a lot of parallels with existing inequalities in global wealth creation in general, or even with things like climate change, which negatively impacts already vulnerable communities even though wealthier nations have traditionally contributed much more to global emissions, or with other types of supply chains where workers in developing countries make clothing for consumers in the global north that they themselves would not be able to buy. That's really interesting and something that I think hasn't been covered a lot, or at least I had not read that much about it. If we can switch a little bit to thinking about developing AI in a manner that is responsible, what are the key elements to bring into the design and application of artificial intelligence to make sure there are safeguards in place to guard against unintended consequences?
Ebrahim Bagheri:
I think that's a billion-dollar question, Rosa, and I wish I had the answer to it. But I have some thoughts on what it means to do responsible development of AI. My first thought is what I mentioned in the previous episode: identifying what our core values are and what problems we want to solve. Do we value technology development inherently, or do we value it because it can solve some of our core problems? So it's a matter of setting priorities. If the priority is to do technology development for the sake of humanity, then we should be focusing on what problem we have, how we can best tackle it, and whether AI should have a part, and in that process, think about participatory design, where we engage, through a democratic process, with every individual who will be impacted.
So I think that's the process for responsible development. But if you ask me about the steps, I think the most important step we need to take is education, and I say that partly from my role as an educator at the university. I think education is very important. Misinformation has been exacerbated because of AI: there are algorithms that tell you what type of information to write so that people will read it most, believe it most, and send it to their friends most. In contrast, there are a lot of people who say, "How can we develop AI technology that prevents misinformation?" It's the question of whether we can develop technology that stops technology. If you think that way, I think you will never actually find a solution, because as soon as you find an algorithm that stops the other algorithm, the other group will develop another algorithm.
So it's a never-ending race, and as Scandinavian countries like Finland have shown, addressing misinformation should be done through a process that creates a resilient society, not through technology alone. Their solution has been, "Let's go to our schools and educate our students, helping them become resilient against misinformation." Right? If every single person in society is educated to make critical judgments, then you're safe against misinformation, and I think the same analogy applies to the responsible development of AI. If you want to create AI that's responsible, you should educate your general population about AI: what it is, where it should be used, what its potential is, and how it impacts rights and privacy. And on the other hand, you should also educate our engineers, computer scientists, data scientists, and AI developers about their legal and ethical responsibilities, and let them know that even if they're working for a high-tech company, and the company obviously bears responsibility, they personally have a legal and moral responsibility to understand the impact of the work they're doing.
So you educate your AI engineers to say, "Hey, look, you have the legal and moral responsibility to uphold values that are important." Even though your supervisor is saying, "Let's collect all this personal data and build this next great, accurate algorithm," at some point you should say, "Doesn't this violate people's privacy? Maybe we should not do it." So I think education is key, both for the people who develop technology and for the people who use it, the general population.
Obviously, I don't want to downplay the role that regulation and enforcement play. That's really key. But without the education component, even if you have the best legal framework and the best regulation, if you don't have people educated about it, it's very hard to enforce. So it's all about public awareness, understanding the impacts of AI, and making sure people who develop AI know that they're legally and morally responsible.
Rosa van den Beemt:
That is a wonderful way to end this conversation.
Ebrahim Bagheri:
Thank you, Rosa, for the opportunity. It's lovely to talk to you.
Rosa van den Beemt:
Likewise, Ebrahim. Thank you so much. It's been a joy.
Ebrahim Bagheri:
Thank you.
Michael Torrance:
Thanks for listening to Sustainability Leaders. This podcast is presented by BMO Financial Group. To access all the resources we discussed in today's episode and to see our other podcasts, visit us at bmo.com/sustainabilityleaders. You can listen and subscribe free to our show on Apple Podcasts or your favorite podcast provider, and we'll greatly appreciate a rating and review and any feedback that you might have. Our show and resources are produced with support from BMO's marketing team and Puddle Creative. Until next time, I'm Michael Torrance. Have a great week.
Speaker 5:
For BMO disclosures, please visit bmocm.com/podcast/disclaimer.