[–] Max_P@lemmy.max-p.me 8 points 12 hours ago (1 children)

Some modern workloads can take advantage of multiple computers. For compilation, tools like distcc let you spread the load across several machines.
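Something like this, roughly (host names here are made up, and it assumes distccd is installed and running on each box):

```
# Tell distcc which machines can accept compile jobs
export DISTCC_HOSTS="localhost box1 box2"

# Build with distcc wrapping the compiler; -j should roughly
# match the total core count across all hosts
make -j12 CC="distcc gcc" CXX="distcc g++"
```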

If you join them into a Kubernetes cluster, you can run many replicas of one service, or many different services at once.
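For example (deployment name and image are just placeholders):

```
# Run one service, then scale it out across the cluster
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=6
```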

It's still an unsolved problem, though: we run into single-core bottlenecks to this day, before any other machines are even involved.
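Back-of-envelope for why (Amdahl's law; the 95% figure below is just an assumed example): if a fraction p of the work parallelizes, the speedup on N cores is

S(N) = 1 / ((1 − p) + p/N)

so even with p = 0.95, the speedup can never exceed 1/0.05 = 20×, no matter how many cores or machines you throw at it.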

[–] sxan@midwest.social 2 points 1 hour ago

Yes. It's always the bandwidth that's the main bottleneck, whether it's CPU-to-memory, IPC, or the network.
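Rough orders of magnitude (ballpark figures, they vary a lot by hardware):

```
L1 cache hit             ~       1 ns
Main memory access       ~     100 ns
Datacenter network RTT   ~ 500,000 ns
```

Each tier is orders of magnitude slower than the one before it, which is why moving work off-box costs so much.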

Screw quantum computers; what we need is quantum-entangled memory sharing at a distance. Imagine! Even if only within a single computer, all memory could be L1 cache.