A recent pre-print study by researchers at the University of California, Riverside and the University of Texas at Arlington investigated how much water large artificial intelligence models such as OpenAI's ChatGPT use during training.
The researchers note that the ‘water footprint’ of these models has largely gone unnoticed. They distinguish between water ‘withdrawal’ and ‘consumption’, the latter referring to water lost to evaporation during use in data centres. Using this framework, the study estimates that a 20-50 question conversation with ChatGPT may ‘drink’ the equivalent of a 500ml bottle of water; multiplied across billions of users, the total water footprint becomes ‘extremely large’.
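The headline figure above implies a simple per-question rate. The sketch below is purely illustrative back-of-envelope arithmetic based on the 500ml-per-conversation estimate; it is not the study's actual methodology, and the one-billion-questions-per-day scaling factor is a hypothetical assumption, not a figure from the study.

```python
# Illustrative arithmetic only: derives a per-question water estimate from the
# study's headline "500 ml per 20-50 question conversation" figure.

ML_PER_CONVERSATION = 500            # study's headline estimate, in millilitres
MIN_QUESTIONS, MAX_QUESTIONS = 20, 50

def ml_per_question(questions: int) -> float:
    """Water per question, in millilitres, at the headline rate."""
    return ML_PER_CONVERSATION / questions

low = ml_per_question(MAX_QUESTIONS)   # 10 ml/question at 50 questions
high = ml_per_question(MIN_QUESTIONS)  # 25 ml/question at 20 questions

# Hypothetical scaling: 1 billion questions per day, converted to litres.
daily_litres = 1_000_000_000 * low / 1000

print(f"{low:.0f}-{high:.0f} ml per question; "
      f"~{daily_litres:,.0f} L/day at 1B questions")
```

Even at the lower bound of 10ml per question, a billion daily questions would consume on the order of ten million litres of water per day, which is the scale the researchers describe as ‘extremely large’.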
The study also highlights that the water consumed by these AI models is clean freshwater, used to generate electricity and cool servers, and cannot be recycled on site. The researchers estimate that training GPT-3 alone may have consumed a staggering 700,000 litres of water at the data centres of Microsoft, the company that has invested billions of dollars in its partnership with OpenAI. The study also mentions Google’s Language Model for Dialogue Applications (LaMDA), which powers the company’s AI chatbot Bard, and estimates its water consumption in the order of millions of litres.
The researchers express concern over the massive water consumption of these AI chatbots and call on companies to address their ‘water footprint’ as part of their social responsibility. The study also notes that training in Asia’s less energy-efficient data centres could roughly triple water use.
With the newly launched GPT-4, which has a larger model size, the researchers anticipate that the water footprint will grow further.
The study calls for collective efforts to tackle global water challenges and to make the water footprint of AI models a priority. With billions of people using these models, the authors argue, tech companies have a social responsibility to minimize their water consumption.