Deep Learning and Climate Change | Article Analysis


As debates about climate change quite literally heat up, many people are interested in the factors, behaviours and events driving it. For data scientists, machine learning models are becoming ever more relevant to new kinds of analytical tasks, and as technologies like neural networks lead these advancements, it is important to understand the positive and negative effects of employing them, so we can decide whether they should be incorporated into best practice for the future.

The article, Deep Learning and Climate Change by Lukas Biewald, discusses how Deep Learning models are represented in the media, and how their negative impact on the environment is often misrepresented.

The criticism arises from a paper called Energy and Policy Considerations for Deep Learning in NLP, which uses tables like these to compare the energy consumption and costs of different types of learning models. From the table, it is quite clear that NAS, or Neural Architecture Search, emits extreme amounts of CO2 and incurs enormous cloud computing costs.
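The comparison behind such tables reduces to simple arithmetic: energy drawn by the hardware over the length of the training run, scaled by data-centre overhead and the carbon intensity of the grid. The sketch below illustrates that calculation; the function name and all numeric inputs are illustrative assumptions, not figures taken from the article or the paper.

```python
# Illustrative sketch of the emissions arithmetic used in comparisons
# like those in "Energy and Policy Considerations for Deep Learning in NLP".
# All figures below are assumed for the example, not quoted from the article.

def training_co2_lbs(gpu_count, avg_gpu_watts, hours,
                     pue=1.58, lbs_co2_per_kwh=0.954):
    """Estimate CO2 (in pounds) for a single training run.

    pue: power usage effectiveness, the data-centre overhead multiplier.
    lbs_co2_per_kwh: grid carbon intensity (approximate US average).
    """
    kwh = gpu_count * avg_gpu_watts * hours / 1000 * pue
    return kwh * lbs_co2_per_kwh

# Hypothetical run: 8 GPUs drawing 250 W each for 120 hours.
print(round(training_co2_lbs(8, 250, 120)))  # → 362
```

The point the table makes is visible in the formula itself: emissions scale linearly with GPU-hours, so a NAS run taking thousands of hours on many accelerators dwarfs an ordinary model trained in an afternoon.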

The negative portrayal in the media and in papers like this one comes from a focus on the most complex types of Deep Learning, including neural architecture search, which consequently have the greatest carbon emissions. With climate change currently a top-of-mind issue for many people, it is easy to focus on the raw numbers without contextualising the entire scenario. In this case, Biewald argues that yes, NAS models could be cause for concern if a far greater number of businesses employed them, but at the moment the technologies behind these types of machine learning are far too complex and unnecessary for the great majority of business tasks, and hence do not warrant restrictions on their use.

 
Figure 1: a NAS, or Neural Architecture Search model, can take thousands of hours to train but most businesses don’t need models as complex as these.


In the bigger picture, this article is interesting because it offers an industry professional's perspective on how non-professionals or the media can shape messages to suit a narrative. It is always important to understand the context behind what a message claims, and the implications that context has for its broader application. At White Box, some of our most important work is identifying what clients' data means in the context of their own business and industry, and how the information gathered from that data can be used in a way that is impactful without straying from the true message of the brand or business vision. As this article shows, it is sometimes extremely useful to have an 'out of the box' perspective on the real implications of trends, data and other factors influencing business, or the use of Deep Learning in this case, in order to create clarity and drive development in a way that is beneficial, rather than crippling, to longer-term goals.

Figure 2: it is worth mentioning that, when contextualised, training a neural architecture search model can emit 4x more carbon than an average car does over its entire lifetime.


For the original article, click here.

For more data analysis and visualisations, click here.

Or, get in touch for a discussion about your data strategy.

Commentary | Guest User