MIT moves toward greener, more sustainable artificial intelligence

While current artificial intelligence (AI) technology holds strategic and transformative potential, it isn’t always environmentally friendly due to its high energy consumption. To the rescue are researchers from the Massachusetts Institute of Technology (MIT), who have devised a solution that not only lowers costs but, more importantly, shrinks the carbon footprint of training AI models.

Back in June 2019, the University of Massachusetts at Amherst revealed that training a single large AI model can emit roughly 626,000 pounds of carbon dioxide. How so? Contemporary AI isn’t just run on a personal laptop or a simple server. Rather, deep neural networks are deployed across diverse arrays of specialized hardware platforms. The emissions from training such AI systems are approximately five times the lifetime carbon emissions of an average American car, including its manufacturing.

Moreover, both Analytics Insight and Kepler Lounge warned that Google’s AlphaGo Zero — the AI that plays the game of Go against itself to learn — generated a massive 96 tons of carbon dioxide over 40 days of research training. That is equivalent to the emissions from 1,000 hours of air travel, as well as the annual carbon footprint of 23 American homes. The takeaway? Numbers like these would make large-scale AI model deployment both infeasible and unsustainable over time.

MIT’s research team has devised a groundbreaking automated AI system, termed a once-for-all (OFA) network, described in the team’s research paper. The OFA network minimizes energy consumption by “decoupling training and search, to reduce the cost.” It builds on advances in automated machine learning (AutoML).

Essentially, the OFA network functions as a ‘mother’ network to numerous subnetworks. As the ‘mother’ network, it feeds its knowledge and past experiences to all the subnetworks, training them to operate independently without the need for further retraining. This is unlike previous AI technology that had to “repeat the network design process and retrain the designed network from scratch for each case. Their total cost gr[ew] linearly … as the number of deployment scenarios increase[d], which … result[ed] in excessive energy consumption and CO2 emission.”
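
To make that mechanism concrete, here is a minimal, purely illustrative Python sketch of the once-for-all idea. The class and function names are hypothetical and this is not the MIT team’s implementation: it only shows how one large “mother” network could hold the weights, with smaller subnetworks carved out of those weights for different devices, without retraining each one from scratch.

```python
# A self-contained sketch of the once-for-all idea (hypothetical names, not the
# MIT implementation): one "mother" supernet holds weights for the largest
# architecture, and smaller subnetworks reuse slices of those weights, so no
# per-device retraining is needed. Actual OFA training is omitted here.
import numpy as np


class OnceForAllSupernet:
    def __init__(self, in_dim=16, out_dim=10, max_width=64, max_depth=4):
        rng = np.random.default_rng(0)
        # Weights for the *largest* configuration; in the real method these
        # would be trained once so that every subnetwork inherits them.
        self.hidden = [
            rng.standard_normal((in_dim if i == 0 else max_width, max_width)) * 0.1
            for i in range(max_depth)
        ]
        self.head = rng.standard_normal((max_width, out_dim)) * 0.1

    def extract_subnet(self, depth, width):
        """Return a forward pass for a smaller subnetwork that simply reuses
        the first `depth` layers and first `width` channels of the supernet."""
        layers = [
            W[: (W.shape[0] if i == 0 else width), :width]
            for i, W in enumerate(self.hidden[:depth])
        ]
        head = self.head[:width, :]

        def forward(x):
            for W in layers:
                x = np.maximum(x @ W, 0.0)  # ReLU activation
            return x @ head

        return forward


# One trained supernet can serve many deployment targets with no retraining.
supernet = OnceForAllSupernet()
phone_model = supernet.extract_subnet(depth=2, width=16)   # small, low-power device
server_model = supernet.extract_subnet(depth=4, width=64)  # full-capacity deployment
x = np.ones((1, 16))
print(phone_model(x).shape, server_model(x).shape)  # (1, 10) (1, 10)
```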

In other words, with the OFA network in use, there is little need for additional retraining of subnetworks. This efficiency decreases costs, curtails carbon emissions and improves sustainability.
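
As a back-of-the-envelope illustration of that cost difference, the snippet below compares how total training cost grows with the number of deployment targets. The GPU-hour figures are hypothetical placeholders, not measurements from the MIT paper.

```python
# Hypothetical cost figures, chosen only to illustrate the scaling argument;
# they are not numbers from the MIT paper.
TRAIN_COST = 100.0   # cost to train one network from scratch (e.g. GPU-hours)
SEARCH_COST = 1.0    # cost to specialize one subnetwork without retraining

for n_targets in (1, 10, 100):
    conventional = n_targets * TRAIN_COST                 # retrain per device: grows linearly
    once_for_all = TRAIN_COST + n_targets * SEARCH_COST   # train once, then cheap specialization
    print(f"{n_targets:4d} targets  conventional: {conventional:7.0f}   once-for-all: {once_for_all:7.0f}")
```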

Assistant Professor Song Han, of MIT’s Department of Electrical Engineering and Computer Science, was the project’s lead researcher. He said, “Searching efficient neural network architectures has until now had a huge carbon footprint. But we reduced that footprint by orders of magnitude with these new methods.”

Chuang Gan, co-author of the research paper, added, “The model is really compact. I am very excited to see OFA can keep pushing the boundary of efficient deep learning on edge devices.”

A compact model means AI can move toward miniaturization and run on smaller, lower-power edge devices. That could spell next-generation advantages in energy efficiency and a lighter environmental footprint.

+ MIT News