[–] TimeSquirrel@kbin.social 3 points 11 months ago* (last edited 11 months ago) (5 children)

Aren't there diminishing returns on this, where it starts to make more sense to offload things to a GPU or something instead of piling on ever more CPU cores? There have to be a lot of inefficiencies with that many interconnects.

[–] agressivelyPassive@feddit.de 20 points 11 months ago

GPUs aren't really suitable for many workloads. These CPUs are typically used in servers; you can't really offload a Docker container onto a GPU.

[–] hamsterkill@lemmy.sdf.org 16 points 11 months ago (1 children)

This is the type of processor companies want in things like VM servers that host large numbers of VMs.

GPUs are really good at only specific kinds of computation. These are still general-purpose, all-around processors.

[–] thelastknowngod@lemm.ee 2 points 11 months ago

Yep. I would LOVE one of these chips in a Kubernetes node.

[–] _s10e@feddit.de 8 points 11 months ago

The alternative to multiple cores is a single core that runs faster. We tried that and hit a limit, so it's many cores now.
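
To make that concrete, here's a minimal sketch (Python, with a made-up `crunch` workload) of why more cores still pay off once single-core speed has plateaued: independent chunks of work can simply be farmed out, one worker per core.

```python
# Minimal sketch: single-core speed has plateaued, so throughput comes from
# spreading independent, CPU-bound work across many cores instead.
from multiprocessing import Pool, cpu_count

def crunch(n: int) -> int:
    # Stand-in for a CPU-bound task (compression, request handling, etc.).
    total = 0
    for i in range(1, n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [200_000] * 128                      # 128 independent chunks of work
    with Pool(processes=cpu_count()) as pool:   # one worker process per core
        results = pool.map(crunch, jobs)        # chunks run in parallel
    print(f"{cpu_count()} cores, {len(results)} chunks done")
```

More cores means more chunks finish per second, even though each individual chunk runs no faster than it would on one core.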

[–] namingthingsiseasy@programming.dev 1 points 11 months ago

GPUs are still pretty bad at handling conditional logic; they're optimized for doing mathematical operations instead.

But you are right in the sense that people are exploring different kinds of hardware for workloads that are getting increasingly specific. We're not in a CPU vs GPU world anymore, but more like a "what kind of CPU do I need?" situation.
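
To illustrate the conditional-logic point above, here's a small sketch (NumPy used purely as an illustration; the arrays and math are invented): the first computation is the uniform, branch-free arithmetic GPUs excel at, while the second forces a per-element decision, which is where GPU-style hardware loses its advantage.

```python
# Sketch contrasting GPU-friendly and GPU-unfriendly work (hypothetical data).
import numpy as np

x = np.random.rand(100_000)

# GPU-friendly: the same arithmetic applied to every element, no branches.
y = 0.5 * x * x + 2.0 * x + 1.0

# GPU-unfriendly in spirit: each element takes a different code path, so
# SIMD lanes / GPU warps diverge and end up executing both branches.
def branchy(v):
    out = np.empty_like(v)
    for i, e in enumerate(v):
        if e > 0.5:
            out[i] = e ** 0.5      # one code path...
        else:
            out[i] = e * e * e     # ...or a completely different one
    return out

z = branchy(x)
```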

[–] Blackmist@feddit.uk 1 points 11 months ago

One of their benchmark graphs is for Stable Diffusion, showing how much faster their CPU runs it than a 96-core AMD Epyc CPU. I'm like 99% sure that a GPU would run that at least 10 times faster.