this post was submitted on 25 Nov 2023
777 points (96.9% liked)

Technology

[–] cosmicrookie@lemmy.world 58 points 1 year ago* (last edited 1 year ago) (6 children)

It's so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than it is when it's a human choice. You can't punish AI for doing something wrong. AI doesn't require a raise for doing something right, either.

[–] Strobelt@lemmy.world 34 points 1 year ago (1 children)

That's an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm, and get off with just a slap on the wrist.

We should all remember that every single piece of tech we have was built by someone. And that someone and their employer should be held accountable for everything the tech does.

[–] sukhmel@programming.dev -4 points 1 year ago (1 children)

How many people are you going to hold accountable if something was made by a team of ten people? Or a hundred? Do you want to include everyone from the designers to QA?

Accountability should be reasonable: the ones who make decisions should be held accountable, and companies at large should be held accountable, but holding every last developer accountable is just a dream of a world where everything is done correctly and nothing ever needs fixing. That's impossible in the real world, for better or worse.

And in my experience, when there's too much responsibility, people tend either to ignore it and get crushed if anything goes wrong, or to stay away from it or sabotage the work so that nothing ever ships. Either way, you won't get the results you might expect from holding everyone accountable.

[–] Ultraviolet@lemmy.world 7 points 1 year ago

The CEO. They claim that "risk" justifies their exorbitant pay? Let them take some actual risk: hold them criminally liable for their entire business.

[–] Ultraviolet@lemmy.world 19 points 1 year ago

1979: A computer can never be held accountable, therefore a computer must never make a management decision.

2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.

[–] synthsalad@mycelial.nexus 3 points 1 year ago

AI does not require a raise for doing something right either

Well, not yet. Imagine if reward functions evolve into being paid with real money.

[–] zalgotext@sh.itjust.works 3 points 1 year ago (1 children)

You can't punish AI for doing something wrong.

Maybe I'm being pedantic, but technically, you do punish AIs when they do something "wrong" during training, just like you reward them for doing something right.
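Mechanically, that "punishment" is just a negative reward signal nudging the model's behavior during training. A minimal toy sketch of the idea (illustrative only; the action names, rates, and setup are invented for the example, not any real weapon system's training loop):

```python
import random

def train(steps=5000, lr=0.1, seed=0):
    """Toy reward/punishment loop: +1 for the 'right' action, -1 for the 'wrong' one."""
    rng = random.Random(seed)
    prefs = {"hold_fire": 0.0, "fire": 0.0}  # learned value estimate per action
    for _ in range(steps):
        # Mostly pick the best-looking action, occasionally explore at random.
        if rng.random() > 0.1:
            action = max(prefs, key=prefs.get)
        else:
            action = rng.choice(list(prefs))
        # The trainer's judgment: reward restraint, punish firing.
        reward = 1.0 if action == "hold_fire" else -1.0
        # Move the estimate a small step toward the received signal.
        prefs[action] += lr * (reward - prefs[action])
    return prefs

prefs = train()
print(prefs)  # "hold_fire" ends up valued well above "fire"
```

The punished action simply ends up with a lower learned value, so the trained model avoids it. That's the whole sense in which an AI is "punished": a number goes down during training. It carries no consequence for anyone once the system is deployed.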

[–] cosmicrookie@lemmy.world 4 points 1 year ago

But that is during training. My point was that you can't punish AI for making a mistake once it's used in combat situations, which is very convenient for anyone intentionally wanting that mistake to happen.

[–] recapitated@lemmy.world 3 points 1 year ago

Whether in the military or in business, responsibility should lie with whoever deploys it. If they're willing to pass the buck up to the implementer or the designer, then they weren't confident enough in it to use it in the first place.

Because, like all tech, it is a tool.

[–] reksas@lemmings.world 0 points 1 year ago* (last edited 1 year ago) (1 children)

That is like saying you can't punish a gun for killing people.

edit: meaning that it's redundant to talk about not being able to punish AI, since it can't feel or care anyway. No matter how long a pole you use to hit people with, responsibility for your actions will still reach you.

[–] cosmicrookie@lemmy.world 5 points 1 year ago (2 children)

Sorry, but this is not a valid comparison. What we're talking about here is a gun with AI built in that decides whether or not it should pull the trigger. With a regular gun, a human always pulls the trigger. Now imagine an AI gun that you point at someone, and the AI decides whether to fire. Who do you attribute the death to in that case?

[–] reksas@lemmings.world 1 points 1 year ago (1 children)

The one who deployed the AI to be there deciding whether or not to kill.

[–] cosmicrookie@lemmy.world 1 points 1 year ago (1 children)

I don't think that is what "autonomously decide to kill" means.

[–] reksas@lemmings.world 1 points 1 year ago* (last edited 1 year ago) (1 children)

Unless it's actually sentient, being able to decide whether or not to kill is just a more advanced targeting system. Not saying it's a good thing they're doing this at all; it's almost as bad as using tactical nukes.

[–] cosmicrookie@lemmy.world 2 points 1 year ago (1 children)

It's the difference between programming it to do something and letting it learn, though.

[–] reksas@lemmings.world 1 points 1 year ago

Letting it learn is just new technology that has become possible. It's not bad on its own, but it has enormous potential to be used for good and for evil.

But yes, it's pretty bad if they're creating machines that learn how to kill people by themselves. Create enough of them, and we're only an unknown number of mistakes and acts of negligence away from a localized "AI uprising". And if in the future they create some bigger AI to manage a bunch of them, and perhaps delegate production to it too because that's more efficient and cheaper, the danger is even greater.

AI doesn't even need sentience to do unintended things. When I've used ChatGPT to help me write scripts, it sometimes seems to decide on its own to do something in a way I didn't request, or to add something stupid. That's usually partly my own fault for not defining what I want properly, but a mistake like that is really easy to make, and when what we're defining is who we want the AI to kill, it becomes awful to even think about.

And if nothing goes wrong and it all works exactly as planned, that's arguably an even bigger problem, because then we have countries with efficient, unfeeling, mass-producible soldiers that follow orders 100%, will not retreat on their own, and will not stop until told to. With the current political rise of certain types of people all around the world, this is even more distressing.

[–] Amir@lemmy.ml 0 points 1 year ago

The person holding the gun, just like always.