this post was submitted on 08 Jun 2024
2128 points (98.9% liked)

Programmer Humor

[–] MossyFeathers@pawb.social 28 points 5 months ago

I'm... honestly kinda okay with it crashing. It'd suck, because AI has a lot of potential outside of generative tasks, like science and medicine. However, we don't really have the corporate ethics or morals for it, nor do we have the economic structure for it.

AI at our current stage is guaranteed to cause problems even when used responsibly, because its entire goal is to do human tasks better than a human can. No matter how hard you try to avoid it, even if you do your best to think carefully and hire humans whenever possible, AI will end up replacing human jobs. What's the point in hiring a bunch of people with a hyper-specialized understanding of a specific scientific field if an AI can do their work faster and better? If I'm not mistaken, hyper-specialization is normally advantageous for a scientist because it means they can demand more for their expertise (so long as it's paired with a general understanding of other fields).

However, if you have to choose between 5 hyper-specialized and potentially expensive human scientists, or an AI designed to do the hyper-specialized task with 2~3 human generalists to design the input and interpret the output, which do you go with?

So long as the output is the same or similar, the no-brainer would be to go with the 2~3 generalists and AI; it would require less funding and possibly less equipment. And that's ignoring that, from what I've seen, AI tends to be better than human scientists at hyper-specialized tasks (though you still need scientists to design the input and parse the output). As such, you're basically guaranteed to replace humans with AI.

We just don't have the society for that. We should be moving in that direction, but we're not even close to being there yet. So, again, as much potential as AI has, I'm kinda okay if it crashes. There aren't enough people capable of handling an AI-dominated world yet. There are too many people who see things like money, government, and economics as some kind of magical force of nature, and not as human-made systems which only exist because we let them.