Artificial intelligence is an exciting new technology that promises to make our lives easier, but have you ever thought about its carbon footprint?
In this article, we will take a look at the carbon footprint generated by artificial intelligence and why this is something really important to keep an eye on!
What do we know about AI's carbon footprint?
The artificial intelligence algorithms behind some of the most cutting-edge applications in technology, such as writing coherent passages of text or generating images from descriptions, are extremely expensive to train, and that training consumes large amounts of electricity.
To give you an idea of how much energy we’re talking about, one study estimated the carbon footprint of training a single large AI model at as much as 284 tonnes of carbon dioxide equivalent, five times the lifetime emissions of an average car.
If that sounds hard to believe, consider these examples:
GPT-3, a powerful language model created by the AI company OpenAI, produced the equivalent of 552 metric tons of carbon dioxide during its training (the same amount that would be produced by driving 120 passenger cars for a year). Google’s advanced chatbot Meena, meanwhile, was estimated to have emitted 96 metric tons of carbon dioxide equivalent, about the same as powering more than 17 homes for a year.
What we know for sure is that deep-learning algorithms are usually evaluated in terms of accuracy; perhaps it’s time to judge them by their environmental impact as well.
Before we delve into AI’s carbon footprint, let’s first get clear on two terms that are easy to confuse: carbon emissions and carbon footprint.
CO2e, or carbon dioxide equivalent, is a unit that expresses the climate impact of all greenhouse gases as the amount of carbon dioxide that would cause the same warming. For example, a tonne of methane traps far more heat than a tonne of CO2, so it counts as roughly 25 tonnes of CO2e over a century. The lower the CO2e, the less impact the activity has on the climate.
Your carbon footprint, on the other hand, is the sum of all the emissions your activities induce over a certain period of time, usually a year.
Carbon dioxide itself is not directly harmful to human beings or other organisms at the concentrations we encounter – the damage comes from what our emissions do to the atmosphere.
The more CO2 there is in the atmosphere, the less heat radiation escapes into space; instead of being released, that trapped energy stays concentrated at Earth’s surface.
As CO2 levels continue to rise due to human activities such as extracting, refining, transporting, and burning fossil fuels, global temperatures increase and the climate changes.
There are other gases that trap heat as well – like methane and water vapour – but CO2 poses the greatest risk if it continues to accumulate, because we emit it in enormous volumes and it persists in the atmosphere for centuries.
In any case, climate change and energy consumption are a growing concern for most people, but the good news is that we can still act.
We need to build the future today
Artificial intelligence (AI) is all around us. You use it every day in the form of chatbots, digital assistants, and movie recommendations from streaming services, all of which depend on deep learning and language processing – the way in which computer models are trained to recognise patterns in data.
AI research is also critical to medicine, science, and security, so machine learning models and the hardware that runs them are a constant part of our lives.
Training these models is energy-intensive: it requires powerful computers running for long periods, and that electricity use translates directly into an outsized amount of carbon emissions.
The computing power used in machine learning grew 300,000-fold between 2012 and 2018. It’s not hard to see how artificial intelligence could have a major climate impact.
But this isn’t inevitable! Researchers at the University of Copenhagen created an open-source program that assesses and predicts carbon footprints for any given model trained with neural networks.
The program, called Carbontracker, is written in Python, which makes it easy to integrate as an add-on to existing deep-learning training code.
Carbontracker periodically collects measurements of the energy being used by distinct elements of the model training, as well as information about the carbon intensity of local or regional electricity sources.
It uses collected data to predict the duration, energy use, and carbon footprint of a training session. To make it easier for users to grasp the results intuitively, Carbontracker reports carbon footprint predictions in terms of kilometres travelled by car.
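To give a sense of how little code that integration takes, here is a minimal sketch of wrapping a toy PyTorch training loop with Carbontracker. The tiny linear model and random data are stand-ins of ours, not part of the tool, and Carbontracker needs hardware it can actually meter, such as NVIDIA GPUs or Intel CPUs that expose RAPL readings:

```python
# Minimal sketch: wrapping a toy PyTorch training loop with Carbontracker
# (pip install carbontracker torch); the linear model and random data are
# placeholders of ours, not part of the Carbontracker API.
import torch
from carbontracker.tracker import CarbonTracker

model = torch.nn.Linear(10, 1)               # stand-in model
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
data, targets = torch.randn(256, 10), torch.randn(256, 1)

max_epochs = 4
tracker = CarbonTracker(epochs=max_epochs)   # predicts the full-run footprint
                                             # after metering early epochs
for epoch in range(max_epochs):
    tracker.epoch_start()                    # begin metering this epoch
    optimiser.zero_grad()
    loss = torch.nn.functional.mse_loss(model(data), targets)
    loss.backward()
    optimiser.step()
    tracker.epoch_end()                      # log energy and carbon data

tracker.stop()                               # shut monitoring threads down
```

By default the tool meters the first epoch and then reports its prediction for the whole run, so the overhead of tracking stays small.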
Researchers tested Carbontracker by running it while training two deep-learning models to read medical images, using data sets of eye blood vessels, lung X-rays, and lung CT scans.
The program predicted the energy used in these training sessions to within 4.9-19.1%, the carbon footprint to within 7.3-19.9%, and the duration of training to within 0.8-4.6%, the researchers reported at a computer science workshop this past summer.
Once computer scientists are aware of the carbon footprint machine learning creates, they can take concrete and often simple steps to reduce it.
The researchers also recommend moving the training of machine learning models to countries with low-carbon energy, such as Sweden; the same training session run in Estonia, whose grid relies heavily on oil shale, can emit as much as 61 times more carbon.
Another option is to train models during low-carbon-intensity hours. In many areas the carbon intensity of electricity varies with the time of day, and timing alone can cut the carbon footprint of deep learning by three-quarters in Denmark and by half in Great Britain.
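As a toy illustration of that time-shifting idea, the sketch below holds a job until the local grid dips below a chosen carbon-intensity threshold. The grid_carbon_intensity function is a hypothetical stub of ours; a real script would query a regional data source such as Great Britain’s Carbon Intensity API:

```python
# Toy sketch: hold a training job until the grid is "green enough".
# grid_carbon_intensity() is a hypothetical stub; swap in a real regional
# data source (e.g. Great Britain's Carbon Intensity API) before use.
import time

THRESHOLD_G_PER_KWH = 150      # assumed acceptable intensity, in gCO2e/kWh
POLL_SECONDS = 15 * 60         # re-check the grid every 15 minutes

def grid_carbon_intensity() -> float:
    """Return the current grid carbon intensity in gCO2e/kWh (fake value)."""
    return 120.0               # placeholder so the sketch runs end to end

def wait_for_green_window() -> None:
    # Block until intensity drops below the threshold, then return.
    while grid_carbon_intensity() > THRESHOLD_G_PER_KWH:
        time.sleep(POLL_SECONDS)

wait_for_green_window()
print("Grid looks green; launching training now.")
```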
Artificial intelligence researchers can also design algorithms to be as efficient as possible, minimising the computing power – and thus the carbon footprint – required for model training. They can choose more energy-efficient hardware and calibrate its settings with energy consumption in mind.
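As one concrete example of that kind of efficiency lever (our illustration, not the Copenhagen group’s method), mixed-precision training in PyTorch does much of its arithmetic in 16-bit floats, which can cut both training time and energy on GPUs that support it:

```python
# Generic mixed-precision training sketch in PyTorch; requires a CUDA GPU.
# The tiny model and random data are placeholders for a real workload.
import torch

device = "cuda"
model = torch.nn.Linear(10, 1).to(device)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()         # keeps small fp16 gradients alive
data = torch.randn(256, 10, device=device)
targets = torch.randn(256, 1, device=device)

for step in range(100):
    optimiser.zero_grad()
    with torch.cuda.amp.autocast():          # run the forward pass in fp16
        loss = torch.nn.functional.mse_loss(model(data), targets)
    scaler.scale(loss).backward()            # scale the loss to avoid underflow
    scaler.step(optimiser)                   # unscales grads, steps optimiser
    scaler.update()                          # adapt the loss-scale factor
```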
“We propose the total energy and carbon footprint of model development and training is reported alongside accuracy and similar metrics,” they said, arguing that this would push the field toward more environmentally sustainable practice.
What should we expect?
We need to take a step back and acknowledge that, while building larger neural networks can be useful for modelling intelligence in machines, it is not the only possible way.
We should rethink, from first principles, how we build artificial intelligence into systems, because the future of the planet depends on it.
We must understand that we have a social responsibility when creating these new technologies: to improve the planet and to take a holistic view of growth.
History tells us we cannot keep pursuing growth at any cost; if we do, we will leave future generations a world that cannot be saved.
Let’s also draw inspiration from the human brain – the original source of intelligence. Relative to today’s deep-learning methods, our brains are incredibly efficient: they weigh just a few pounds and run on about 20 watts of energy, barely enough to power a dim lightbulb.
And yet, it represents the most powerful form of intelligence in the known universe.
Technology keeps growing – and that’s great! Our task is to balance that growth against the need to reduce carbon emissions: not to pile new emissions on top of what’s already there, while still keeping future growth and sustainability in mind. How can we accomplish this?
One way to steer our future in the right direction is to power AI with renewable energy and to invest in sustainable development as much as we can.
In addition, finding innovative ways to collaborate on solutions – including using neural networks themselves to tackle environmental problems – will be extremely beneficial.
“If we want AI to continue to advance rapidly, we must reduce its environmental impact,” says John Cohn, an IBM fellow and member of the MIT-IBM Watson AI Lab. “The advantage of developing methods for making AI models smaller and more efficient is that they can also perform better.”