this post was submitted on 23 Nov 2024
551 points (95.8% liked)

Technology

59656 readers
2686 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

Writing a 100-word email using ChatGPT (GPT-4, latest model) consumes one 500 ml bottle of water. It uses 140 Wh of energy, enough for 7 full charges of an iPhone Pro Max.

top 50 comments
[–] gzerod200@lemmy.world 8 points 1 day ago (2 children)

Am I going insane? As far as I know cooling with water doesn’t consume the water, it just cycles through the system again. If anyone knows otherwise PLEASE tell me.

[–] Uncut_Lemon@lemmy.world 4 points 1 day ago

Industrial HVAC systems use cooling towers to cool the hot side of the system. The method relies on evaporative cooling to reduce the temperature of the water: the process requires water to be absorbed by the atmosphere to drive the cooling effect. (The lower the humidity, the higher the cooling efficiency, as the air has greater potential to absorb and hold moisture.)

The method is somewhat similar to power station cooling towers, or even swamp coolers. (An odd example would be experimental PC water-cooling builds with 'bong coolers', which are evaporative coolers built from drainage pipes.)
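As a rough sanity check on the physics (my own numbers, not the commenter's): water's latent heat of vaporization is about 2.26 MJ/kg, so you can estimate how much water an ideal evaporative cooler must boil off to reject a given heat load:

```python
# Rough estimate: water evaporated to reject a given heat load via
# evaporative cooling. Idealized: assumes all heat leaves as latent heat
# of vaporization (real towers also reject some heat sensibly).

LATENT_HEAT_J_PER_KG = 2.26e6  # latent heat of vaporization of water

def water_evaporated_liters(heat_wh: float) -> float:
    """Liters of water evaporated to reject `heat_wh` watt-hours of heat."""
    joules = heat_wh * 3600.0
    kg = joules / LATENT_HEAT_J_PER_KG
    return kg  # 1 kg of water is ~1 liter

# The headline's 140 Wh email:
print(f"{water_evaporated_liters(140):.2f} L")  # ~0.22 L in the ideal case
```

By this idealized estimate, 140 Wh of heat evaporates roughly 220 ml, within a factor of a few of the headline's 500 ml bottle once real-world inefficiencies are included.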

[–] nutsack@lemmy.world 2 points 1 day ago (1 children)

yea i really don't know when or why they started measuring electricity in water

[–] Zementid@feddit.nl 1 points 1 day ago (1 children)

Maybe it's a valid measure in the future, though by that measure 500 ml would be enough to power New York (the state) for a day by means of fusion.

[–] nutsack@lemmy.world 5 points 1 day ago

perplexity.ai says that one chat GPT query consumes half a liter of water O_O

I'm imagining a rack of servers just shooting out a fire hose of water directly into the garbage 24 hours a day

[–] DuckWrangler9000@lemmy.world 14 points 2 days ago (1 children)

These article titles are so crazy. Who thinks of this stuff?

[–] JasonDJ@lemmy.zip 10 points 2 days ago
[–] zerozaku@lemmy.world 16 points 2 days ago (2 children)

I have read the comments here and all I understand from my small brain is that, because we are using bigger models which are online, for simple tasks, this huge unnecessary power consumption is happening.

So, can the on-device NPUs we are getting on flagship mobile phones solve these problems, as we can do most of those simple tasks offline on-device?

[–] WolfLink@sh.itjust.works 8 points 2 days ago* (last edited 2 days ago) (1 children)

I’ve run an LLM on my desktop GPU and gotten decent results, albeit not nearly as good as what ChatGPT will get you.

Probably used less than 0.1Wh per response.

[–] Monsieurmouche@lemmy.world 1 points 1 day ago (2 children)

Is this for inferencing only? Do you include training?

[–] interdimensionalmeme@lemmy.ml 2 points 19 hours ago (1 children)

Training is a one-time cost. The more the model gets used, the less energy per query it works out to.

[–] Monsieurmouche@lemmy.world 1 points 16 hours ago (1 children)

Good point. But considering the frequent retraining, the environmental impact can only be spread over a finite number of queries.

[–] interdimensionalmeme@lemmy.ml 1 points 6 hours ago

They have already reached diminishing returns on training, so it will become much less frequent soon; retraining on the same data without a better method is useless. I think the resources consumed per query should only include those actually used for inference. The rest can be dismissed as bad-faith argumentation.

[–] WolfLink@sh.itjust.works 2 points 1 day ago

Inference only. I’m looking into doing some fine tuning. Training from scratch is another story.

[–] maplebar@lemmy.world 29 points 3 days ago (1 children)

Mark my words: generative "AI" is the tech bubble of all tech bubbles.

It's an infinite supply of "content" in a world of finite demand. While fast, it is incredibly inefficient at creating anything, often producing output of dubious quality at best. And finally, there seems to be very little consumer interest in paid-for, commercial generative AI services. A niche group of people are happy to use generative AI while it's available for free, but once companies start charging for access to services and datasets, the number of people who are interested in paying for it will obviously be significantly smaller.

Last I checked there was more than a TRILLION dollars of investment into generative AI across the US economy, with practically zero evidence of genuinely profitable business models that could ever lead to any return on investment. The entire thing is a giant money pit, and I don't see any way in which someone doesn't get left holding the $1,000,000,000,000 generative AI bag.

[–] AbsoluteChicagoDog@lemm.ee 11 points 3 days ago

Don't worry, we'll bail them out once the bubble bursts.

[–] narr1@lemmy.autism.place 90 points 3 days ago (2 children)

Hah! Haha! Hahahaahah! Ties well with this one news article that I glimpsed that claims that by 2030 the need for fresh water will be 140% of the world's freshwater reserves. Infinite growth forever!

[–] frunch@lemmy.world 25 points 3 days ago (1 children)

Time to buy stock in water lol

[–] SlopppyEngineer@lemmy.world 20 points 3 days ago

So, Nestlé stocks?

[–] bruhduh@lemmy.world 16 points 2 days ago

🥵🥵🥵🔥🔥🔥💦💦💦

[–] bandwidthcrisis@lemmy.world 45 points 3 days ago (7 children)

140Wh seems off.

It's possible to run an LLM on a moderately-powered gaming PC (even a Steam Deck).

Those consume power in the range of a few hundred watts, and they can generate replies in seconds, or maybe a minute or so. Power use throttles down when not actually working.

That means a home pc could generate dozens of email-sized texts an hour using a few hundred watt-hours.

I think that the article is missing some factor, such as how many parallel users the racks they're discussing can support.
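That back-of-envelope can be made explicit. The figures below are illustrative assumptions (a ~300 W draw while generating, ~30 s per reply), not measurements:

```python
# Back-of-envelope energy per email-sized reply on a home gaming PC.
# Assumed figures: ~300 W draw while generating, ~30 s per reply.

def wh_per_reply(power_watts: float, seconds: float) -> float:
    """Watt-hours consumed for one reply at a steady power draw."""
    return power_watts * seconds / 3600.0

home = wh_per_reply(300, 30)   # ~2.5 Wh per reply
replies_per_hour = 3600 / 30   # 120 replies in an hour of continuous generation
print(f"{home:.1f} Wh/reply, {home * replies_per_hour:.0f} Wh/hour")
# ~2.5 Wh/reply and ~300 Wh for an hour of nonstop replies:
# dozens of emails for a few hundred watt-hours, far below 140 Wh each.
```

Under these assumptions a single 140 Wh email would correspond to roughly 50 home-PC replies, which is why the article's figure only makes sense if it is amortizing something much larger than one response.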

[–] ikidd@lemmy.world 8 points 2 days ago (1 children)

An article that thinks cooling is "consuming" should probably be questioned in all its claims.

[–] Soleos@lemmy.world 5 points 2 days ago

I think there's probably something wrong with the math around per-response water consumption, but it is true that evaporative cooling consumes potable water: the water cannot be reused until it cycles through the atmosphere and is recaptured as precipitation, the same way you consume water by drinking and pissing it out, or agriculture consumes it for growing things. Fresh water usage is a major concern and bottleneck, especially with climate change. With the average data centre using 300k gallons of water per day, and Google's entire portfolio using 5bn gallons per day, it's not nothing.

[–] dan@upvote.au 3 points 2 days ago (1 children)

I like that the 140Wh is the part you decided to question, not the "consumes 1 x 500ml bottle of water"

[–] bandwidthcrisis@lemmy.world 1 points 1 day ago* (last edited 1 day ago)

That was covered pretty well already!

Or maybe it's using Fluidic logic.

[–] douglasg14b@lemmy.world 24 points 3 days ago* (last edited 3 days ago) (2 children)

You are conveniently ignoring model size here...

Which is a primary impact on power consumption.

And any other processing and augmentation being performed. System prompts and other things bloat the token count. Never mind the fact that you're getting a response almost immediately for something an at-home GPU cluster (not a casual PC) would struggle with for many minutes; power consumption doesn't always scale linearly.

You are also ignoring the realities of a data center. Where the device power usage isn't the only power consumption of the location, cooling must be taken into consideration as well. Redundant power switching also comes with a percentage loss in transmission efficiency which adds to power consumption and heat dispersion requirements.

[–] DarkCloud@lemmy.world 15 points 3 days ago

The study that suggests 10-50 interactions with ChatGPT evaporate a whole bottle of water doesn't account for the fact that cooling systems are enclosed...

...and that "study" is based on a bunch of assumptions, including evaporation from local power plants as well as from the entire buildings GPT's servers are located in. It does this as if one user is served at a time, and as if the organizations involved (such as Microsoft) do nothing BUT serve one user at a time. So the "study" (which isn't peer reviewed and never got published) pretends those buildings don't also serve Bing, or Windows, or all the other functions Microsoft is involved with. It instead assumes whole buildings at Microsoft are dedicated to serving just one ChatGPT user at a time.

It also includes the manufacture of all the server and graphics card equipment, even though the former was in use before ChatGPT and will be used for other things as well, and the latter is only used in training.

You can check the study out yourself here:

http://arxiv.org/pdf/2304.03271

It's completely junk. Worthless. Even uses a click bait title, and keeps talking about "the secret water foot print" as if it's uncovering some conspiracy. It's bunk science.

P.S. It also doesn't seem to understand that the bulk of GPT's training was a one-time cost, paid in 2021, with one smaller update in 2023.

[–] teh7077@lemmy.today 24 points 3 days ago* (last edited 3 days ago) (1 children)

That's what I always thought when reading this and other articles about the estimated power consumption of GPT-4. Run a decent 7B LLM on consumer hardware like the steam deck and you got your e-mail in a minute with the fans barely spinning up.

Then I read that GPT-4 is supposedly a 1760B model. (https://en.m.wikipedia.org/wiki/GPT-4#Background) I don't know how energy usage would scale with model size exactly, but I'd consider it plausible that we are talking orders of magnitude above the typical local LLM.

Considering that the email from the local LLM will be good enough 99% of the time, GPT-4 may just be horribly inefficient, tuned to score higher on some synthetic benchmarks?
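The scaling intuition here can be sketched. Transformer inference costs roughly 2 FLOPs per parameter per generated token, so energy per reply grows roughly linearly with parameter count. This is a first-order sketch only; batching, mixture-of-experts routing, quantization, and memory bandwidth all change the real numbers:

```python
# First-order sketch: dense-transformer inference is ~2 FLOPs per
# parameter per generated token, so cost per token scales ~linearly
# with model size, all else equal. Real deployments differ (batching,
# MoE, quantization), but the order of magnitude is instructive.

def relative_cost(params_big: float, params_small: float) -> float:
    """Ratio of per-token inference FLOPs between two dense models."""
    return (2 * params_big) / (2 * params_small)

# The reported 1760B GPT-4 vs. a local 7B model:
print(f"{relative_cost(1760e9, 7e9):.0f}x")  # ~251x per token
```

So even before data-centre overheads, a model of the reported GPT-4 size would be a couple of orders of magnitude more expensive per token than the 7B model running on a Steam Deck.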

[–] douglasg14b@lemmy.world 22 points 3 days ago (1 children)

Computational demands scale aggressively with model size.

And if you want a response back in a reasonable amount of time you're burning a ton of power to do so. These models are not fast at all.

[–] teh7077@lemmy.today 17 points 3 days ago (1 children)

Thanks for confirming my suspicion.

So, the whole debate about the "environmental impact of AI" is not about generative AI as such at all. It really comes down to people using disproportionately large models for simple tasks that could be done just as well by smaller ones, run locally. Or worse yet, asking a behemoth model like GPT-4 about something that could and should have been a simple search engine query, which I (subjectively) feel has become a trend in everyday tech usage...

[–] oldfart@lemm.ee 7 points 2 days ago

I would say a model like ChatGPT could use a bit more energy than a 7B Llama

[–] Naz@sh.itjust.works 8 points 3 days ago* (last edited 2 days ago) (6 children)

Datacenter LLM tranches are 7-8 H100s per user at full load, which is around 4 kW.

Multiply that by generation time and you get your energy used. Say it takes 62 seconds to write an essay (a highly conservative figure).

That's about 68.9 Wh, so you're right.

Source: I'm an AI enthusiast
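A quick check of that arithmetic, taking the comment's stated figures (which I can't independently verify) at face value:

```python
# Checking the commenter's arithmetic: 7-8 H100s at full load is
# stated as ~4 kW total, generating for a (conservative) 62 seconds.
power_kw = 4.0
gen_seconds = 62
energy_wh = power_kw * 1000 * gen_seconds / 3600
print(f"{energy_wh:.1f} Wh")  # ~68.9 Wh, matching the comment to rounding
```

Either way, the result lands well under the article's 140 Wh per email.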

[–] frunch@lemmy.world 49 points 3 days ago (5 children)

I'm sure I'm missing out, but i have no interest in using chatbots and other LLMs etc. It floors me to see how much attention they get though, how much resources are being dumped into their development and use. Nuclear plants being reopened for the sake of AI?!!

I also assume there's a lot of things they're capable of that could be huge for science, and there's likely lots of big things happening behind closed doors that we're yet to see in the coming years. I know it's not all just chatbots.

The way this article strikes me though, is that it's pretty much just wasting resources for parlor-game level output. I don't know if i like the idea of people giving up their ability to write a basic letter or essay, not that my opinion on the matter is gonna change anything obviously 😅

[–] just_another_person@lemmy.world 22 points 3 days ago

Think of it like this: rich people accumulate more wealth by paying fewer people to accomplish more work faster, so it's worth burning through the worlds resources at breakneck speed to help the richies out, right?

[–] WrenFeathers@lemmy.world 15 points 3 days ago (13 children)

Can we PLEASE shut that shit down? We were doing just fine without it.

[–] vinnymac@lemmy.world 29 points 3 days ago (9 children)

Why does the article make it sound like cooling a data center results in constant water loss? Is this not a closed loop system?

I’m imagining a giant reservoir heat sink that runs throughout a complex to pull heat out of the surrounding environment where some liquid evaporates and needs to be replenished. But first of all we have more efficient liquid coolants, and second that would be a very lazy solution.

I wonder if they’ve considered geothermal for new data centers. You can run a geothermal loop in reverse and use the earth as a giant heat sink. It’s not water in the loop, it’s refrigerant, and it only needs to be replaced when you find the efficiency dropping, which can take decades.

[–] DarkCloud@lemmy.world 11 points 3 days ago

It is a closed loop, but the paper treats it as if it's an open loop: it counts all water use for the building, all the water that went into creating any equipment used, the water that escapes the power plants powering the buildings, and any other buildings that might house related services. Here is the original "study", which is about what maths could be done given the above assumptions:

http://arxiv.org/pdf/2304.03271

In short, it has nothing to do with reality, and is more an attempt by the authors to get their names out there (on bad science that the media is interested in publicizing for clickbait reasons).

[–] Munkisquisher@lemmy.nz 13 points 3 days ago

Evaporative coolers save a ton of energy compared to refrigerator cycle closed loop systems. Like a swamp cooler, the hot liquid that comes from cooling the server is exposed to the atmosphere and enough evaporates off to cool the liquid by a decent percentage, then it's refrigerated before going back into the servers.

The data centre near me is using it, and the fire service is used to being called by people who think the huge clouds of water vapor are smoke.

[–] TheGrandNagus@lemmy.world 17 points 3 days ago (4 children)

Yes, the vast majority are closed loop systems and the water isn't really used up, like a lot of these headlines imply.

That's not to say the energy being used can't be put to better uses, though.
