Technology
Discussing AI’s carbon footprint with Google's Sandra Calvo
Jenny Salmimäki
We sat down with Sandra Calvo, a cloud engineer at Google, to discuss the intersection of AI and the environment.
AI has been taking over the world ever since ChatGPT was released to the public. Even before that, companies had been using AI to supercharge their products and boost their operational efficiency.
What has received less scrutiny so far is the environmental impact of AI’s growing use. Each prompt consumes energy, and thus produces carbon emissions. It has been calculated, for example, that generating an image with a powerful AI model takes as much energy as fully charging a smartphone (MIT Technology Review). It has also been predicted that current AI technology could be on track to consume as much electricity annually as the entire country of Ireland (29.3 terawatt-hours per year, IEEE Spectrum).
So what gives?
We chatted with Sandra Calvo, a cloud engineer at Google, to see what Google has done in the realm of AI and how it has mitigated AI’s negative environmental and social impacts.
The carbon footprint of AI grows as its use around the world increases. How has Google addressed this issue?
Google has been working to address its growing carbon footprint for a long time, in everything we do. In 2007, Google became the first major company to be carbon neutral in all our operations. Ten years later, in 2017, we became the first major company to match 100% of our annual electricity use with renewable energy. Then in 2020 we set a very ambitious goal: to reach net-zero emissions across all of our operations by 2030, every day of the year.
Taking the carbon footprint into account starts with data center efficiency. We use AI to predict energy consumption and optimize how compute capacity is used, which reduces energy use.
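As a loose illustration of that idea (this is a toy sketch, not Google’s actual system; the data, the naive forecast, and the scheduling heuristic are all assumptions), prediction can be used to place flexible batch jobs into the hours expected to need the least energy:

```python
# Illustrative sketch only: predict hourly energy use and shift flexible batch jobs
# into the lowest-energy hours. Toy data and a naive forecast, purely for illustration.
from statistics import mean

# Toy history: observed energy use (arbitrary units) for each hour of the day.
history = {hour: [50 + hour * 2, 48 + hour * 2] for hour in range(24)}

def predict_hourly_energy(hour: int) -> float:
    """Naive forecast: average of past observations for that hour."""
    return mean(history[hour])

def schedule_flexible_jobs(num_jobs: int) -> list[int]:
    """Place delay-tolerant jobs into the hours with the lowest predicted energy use."""
    hours_by_energy = sorted(range(24), key=predict_hourly_energy)
    return hours_by_energy[:num_jobs]

print(schedule_flexible_jobs(3))  # e.g. [0, 1, 2] with this toy data
```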
Most of our data centers also run only on renewable energy, which is something Google invests in very heavily. For example, Google’s data center in Hamina is currently the greenest in Europe: it runs on renewable energy 97% of the time, supplied by the wind power produced at the Hamina wind farm.
“We do not believe that one model will be able to do everything, and we do not believe that bigger is better.”
Every time anyone does a search or uses one of our products, it uses electricity and computing somewhere in our data centers around the world. The decision that has impacted sustainability, and will keep impacting it, is that we do not believe that one model will be able to do everything, and we do not believe that bigger is better.
If we need an app to help kids navigate to school, there’s no need to use the largest model we have and spend a huge amount of compute and capacity on a simple query. That would affect both the cost for clients and the sustainability.
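To make that right-sized-model idea concrete, here is a minimal, hypothetical sketch; the model names, complexity heuristic, and threshold are illustrative assumptions, not Google’s actual implementation:

```python
# Hypothetical sketch: route queries to the smallest model that can handle them.
# Model names and the complexity heuristic are illustrative assumptions only.

SMALL_MODEL = "small-efficient-model"   # cheap, low-energy
LARGE_MODEL = "large-general-model"     # expensive, energy-hungry

def estimate_complexity(query: str) -> float:
    """Very rough proxy for how demanding a query is (length-based stand-in)."""
    return min(len(query.split()) / 100.0, 1.0)

def pick_model(query: str, threshold: float = 0.3) -> str:
    """Use the small model unless the query looks complex enough to need the large one."""
    return LARGE_MODEL if estimate_complexity(query) > threshold else SMALL_MODEL

print(pick_model("When does the next bus to school leave?"))       # -> small-efficient-model
print(pick_model(" ".join(["summarize this long report"] * 20)))   # -> large-general-model
```

The point of the sketch is simply that the routing decision, however it is made in practice, determines how much compute (and therefore energy) a simple query ends up consuming.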
Google also actively engages and collaborates with policymakers, discussing the environmental impact of AI and sustainability, and how to build solutions around them.
Overall, we’re very committed to reducing carbon emissions in everything we do and to influencing regulation to make sure that the future of AI is also sustainable.
How do you see the role of AI in promoting a more sustainable world and in mitigating climate change? Have there been any concrete examples of this yet?
It is true that AI is a powerful tool and it can be used to mitigate climate change. And it’s already doing that.
One example is agriculture, where we have clients that analyze crop data to optimize how they use fertilizers, pest control, and water. We also have spatial data on Google Cloud Earth Engine, an analytics platform that holds geospatial imagery from all over the world. That data has been used for a while now to solve problems related to climate change, deforestation, and health, for example predicting the swarms of mosquitoes that will spread diseases.
The US Forest Service uses it to track wildfires and how they move. An organization called Climate Engine also uses Earth Engine to collect and process data, and then provides climate change insights to the public sector and other organizations. By combining this information, they can deliver real-time data for understanding the impact of floods or wildfires before, during, and after they occur.
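As a rough illustration of that kind of workflow, here is a minimal sketch using the publicly documented Earth Engine Python API. The dataset ID, dates, region, and fire-mask threshold are assumptions chosen for illustration and may need adjusting; running it requires an authenticated Earth Engine account.

```python
# Minimal Earth Engine sketch: count fire-flagged pixels over a region and month.
# Assumptions: dataset ID, date range, region, and threshold are illustrative only.
import ee

ee.Authenticate()  # one-time browser-based authentication
ee.Initialize()

# Rough bounding box around California (illustrative region).
region = ee.Geometry.Rectangle([-124.5, 32.5, -114.0, 42.0])

# MODIS daily thermal anomalies / fire product (check the data catalog for the current version).
fires = (
    ee.ImageCollection("MODIS/061/MOD14A1")
    .filterDate("2023-08-01", "2023-08-31")
    .select("FireMask")
    .max()  # highest fire-mask value per pixel over the month
)

# Count pixels flagged as fire (FireMask values of 7 and above indicate detected fire).
fire_pixels = fires.gte(7).reduceRegion(
    reducer=ee.Reducer.sum(),
    geometry=region,
    scale=1000,
    maxPixels=1e9,
)
print(fire_pixels.getInfo())
```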
What kinds of ethical issues are there in the development of AI, and how has Google prepared for them?
Google has been doing AI for years, so we are not new to this. We started working on our first models over ten years ago, and our mission has always been to build AI that is helpful for everyone.
Responsible AI and its ethical aspects are something we take very seriously, and in 2018 we published our AI Principles.
There are also areas that Google will never pursue with AI: technologies that are likely to cause overall harm, such as surveillance, weapons, or other technology where people could get injured or whose purpose goes against human rights. There is no question about this at Google.
At the same time, this is not static; these principles evolve all the time. As our experience in this space deepens, the list evolves along with it.
If we look at ethical AI development and the things we do, we’re very conscious about what kind of data we use to train our models. We do a lot of auditing and fairness checks in everything we do to tackle bias and discrimination.
Transparency and explainability are also key: how can we make AI transparent, so that people can actually know what data and criteria a decision was based on?
In terms of privacy and security, we have to give users control over their data and make sure that nobody can get unauthorized access to it through our services.
We are embedding our responsibility and principles into every tool, but users don’t always get to see that. We have built safety support into our models to detect sensitive content. So if you have a chatbot created with our technology and it recognizes that a user is using violent or toxic language, the chatbot may not respond, and it flags the conversation in the backend, alerting the creator of the bot.
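As a loose illustration of that flow (not Google’s actual safety tooling; the toxicity scorer, threshold, and flagging hook below are hypothetical stand-ins), a safety gate in front of a chatbot might look something like this:

```python
# Hypothetical sketch of a safety gate in front of a chatbot.
# The toxicity scorer, threshold, and flagging hook are illustrative assumptions.

TOXICITY_THRESHOLD = 0.8

def toxicity_score(text: str) -> float:
    """Stand-in for a real content-safety classifier; returns a score in [0, 1]."""
    flagged_terms = {"violent_term", "toxic_term"}  # placeholder word list
    return 1.0 if set(text.lower().split()) & flagged_terms else 0.0

def flag_for_creator(conversation_id: str, message: str) -> None:
    """Stand-in for notifying the bot's creator in the backend."""
    print(f"[flagged] conversation={conversation_id}: {message!r}")

def respond(conversation_id: str, user_message: str) -> str | None:
    """Decline to answer and flag the conversation when the message looks toxic."""
    if toxicity_score(user_message) >= TOXICITY_THRESHOLD:
        flag_for_creator(conversation_id, user_message)
        return None  # chatbot does not respond
    return f"(model reply to) {user_message}"  # placeholder for the real model call
```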
In image generation, if a user tries to use our technology to generate a picture of a person committing a crime, the AI model will return an error and not generate it.
We really want to encourage people to use AI, but to do it responsibly.
Sandra Calvo is a cloud engineer at Google. Studied electrical engineering, and started her career in energy network automation. Loves IoT and AI. Built an app to never miss the bus. Part of the Mimmit koodaa community.