this post was submitted on 25 Nov 2023
777 points (96.9% liked)

Technology


This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

founded 1 year ago
[–] phoneymouse@lemmy.world 157 points 1 year ago* (last edited 1 year ago) (3 children)

Can’t figure out how to feed and house everyone, but we have almost perfected killer robots. Cool.

[–] sneezycat@sopuli.xyz 82 points 1 year ago (4 children)

Oh no, we figured it out, but killer robots are profitable while happiness is not.

[–] o2inhaler@lemmy.ca 32 points 1 year ago (5 children)

I would argue happiness is profitable, but it would have to be shared amongst the people. Killer robots are profitable for a concentrated group of people.

[–] pelicans_plight@lemmy.world 76 points 1 year ago (12 children)

Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make EMPs so they can send the murder robots back to where they came from. At this point, one of the biggest security threats to the U.S., and for that matter the entire world, is the extremely low I.Q. of everyone who is supposed to be protecting this world. But I think they do this all on purpose; I mean, the day the Pentagon created ISIS was probably their proudest day.

[–] Snapz@lemmy.world 29 points 1 year ago (2 children)

The real problem (and the thing that will destroy society) is boomer pride. I've said this for a long time, they're in power now and they are terrified to admit that they don't understand technology.

So they'll make the wrong decisions, act confident and the future will pay the tab for their cowardice, driven solely by pride/fear.

[–] zaphod@feddit.de 16 points 1 year ago (3 children)

Great, so I guess the future of terrorism will be fueled by people learning programming and figuring out how to make emps so they can send the murder robots back to where they came from.

Eh, they could've done that without AI for like two decades now. I suppose the drones would crash-land in a rather destructive way due to the EMP, which might also fry some of the electronics, rendering the drone useless without access to replacement components.

[–] Kraven_the_Hunter@lemmy.dbzer0.com 68 points 1 year ago (7 children)

The code name for this top secret program?

Skynet.

[–] stopthatgirl7@kbin.social 75 points 1 year ago* (last edited 1 year ago)

“Sci-Fi Author: In my book I invented the
Torment Nexus as a cautionary tale

Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus”

[–] redcalcium@lemmy.institute 63 points 1 year ago* (last edited 1 year ago) (2 children)

"Deploy the fully autonomous loitering munition drone!"

"Sir, the drone decided to blow up a kindergarten."

"Not our problem. Submit a bug report to Lockheed Martin."

[–] Agent641@lemmy.world 64 points 1 year ago (1 children)

"Your support ticket was marked as duplicate and closed"

😳

[–] pivot_root@lemmy.world 33 points 1 year ago

Goes to original ticket:

Status: WONTFIX

"This is working as intended according to specifications."

[–] spirinolas@lemmy.world 17 points 1 year ago* (last edited 1 year ago) (1 children)

"Your military robots slaughtered that whole city! We need answers! Somebody must take responsibility!"

"Aaw, that really sucks *starts rubbing nipples* I'll submit a ticket and we'll let you know. If we don't call in 2 weeks... call again, and we can go through this over and over until you give up."

"NO! I WANT TO TALK TO YOUR SUPERVISOR NOW"

"Suuure, please hold."

[–] at_an_angle@lemmy.one 59 points 1 year ago (5 children)

“You can have ten or twenty or fifty drones all fly over the same transport, taking pictures with their cameras. And, when they decide that it’s a viable target, they send the information back to an operator in Pearl Harbor or Colorado or someplace,” Hamilton told me. The operator would then order an attack. “You can call that autonomy, because a human isn’t flying every airplane. But ultimately there will be a human pulling the trigger.” (This follows the D.O.D.’s policy on autonomous systems, which is to always have a person “in the loop.”)

https://www.businessinsider.com/us-closer-ai-drones-autonomously-decide-kill-humans-artifical-intelligence-2023-11

Yeah. Robots will never be calling the shots.
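The policy Hamilton describes is essentially a gating rule: drones may nominate targets, but nothing is struck without an explicit human approval. A minimal sketch of that "in the loop" structure, assuming all names and thresholds here are hypothetical illustrations rather than any real system:

```python
# Hypothetical sketch of a human-in-the-loop targeting flow: drones flag
# candidate targets, but a strike requires explicit operator approval.
from dataclasses import dataclass

@dataclass
class Candidate:
    target_id: str
    confidence: float  # aggregate confidence from the drones' imagery

def review_queue(candidates, operator_approves):
    """Return only the candidates a human operator explicitly approves."""
    approved = []
    for c in candidates:
        # The system may rank and filter, but it never fires on its own:
        if c.confidence >= 0.9 and operator_approves(c):
            approved.append(c.target_id)
    return approved

# An operator who rejects everything yields zero strikes, no matter how
# confident the drones are.
no_strikes = review_queue([Candidate("t1", 0.99)], lambda c: False)
print(no_strikes)  # prints []
```

The whole debate in the article is about whether that `operator_approves` step stays mandatory or gets optimized away.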

[–] cosmicrookie@lemmy.world 58 points 1 year ago* (last edited 1 year ago) (14 children)

It's so much easier to say that the AI decided to bomb that kindergarten based on advanced intel than if it were a human choice. You can't punish AI for doing something wrong. AI doesn't require a raise for doing something right, either.

[–] Strobelt@lemmy.world 34 points 1 year ago (2 children)

That's an issue with the whole tech industry. They do something wrong, say it was AI/ML/the algorithm and get off with just a slap on the wrist.

We should all remember that every single tech we have was built by someone. And this someone and their employer should be held accountable for all this tech does.

[–] Ultraviolet@lemmy.world 19 points 1 year ago

1979: A computer can never be held accountable, therefore a computer must never make a management decision.

2023: A computer can never be held accountable, therefore a computer must make all decisions that are inconvenient to take accountability for.

[–] 1984@lemmy.today 56 points 1 year ago (3 children)

Future is gonna suck, so enjoy your life today while the future is still not here.

[–] Thorny_Insight@lemm.ee 29 points 1 year ago (1 children)

Thank god today doesn't suck at all

[–] BombOmOm@lemmy.world 45 points 1 year ago* (last edited 1 year ago) (2 children)

As an important note in this discussion, we already have weapons that autonomously decide to kill humans. Mines.

[–] Chuckf1366@sh.itjust.works 111 points 1 year ago (17 children)

Imagine a mine that could move around, target-seek, refuel, rearm, and kill hundreds of people without human intervention. Comparing an autonomous murder machine to a mine is like comparing a flintlock pistol to the fucking Gatling cannon in an A-10.

[–] gibmiser@lemmy.world 62 points 1 year ago (1 children)

Well, an important point you both forget to mention is that mines are considered inhumane. Perhaps that means AI murder machines should also be considered inhumane, and we should just not build them instead of allowing them the way we allow landmines.

[–] livus@kbin.social 26 points 1 year ago

This. Jesus, we're still losing limbs and clearing mines from wars that ended decades ago.

An autonomous field of those is horror movie stuff.

[–] Chozo@kbin.social 28 points 1 year ago (1 children)

Imagine a mine that could move around, target seek, refuel, rearm, and kill hundreds of people without human intervention.

Pretty sure the entire DOD got a collective boner reading this.

[–] Pirky@lemmy.world 34 points 1 year ago (3 children)

Horizon: Zero Dawn, here we come.

[–] DoucheBagMcSwag@lemmy.dbzer0.com 32 points 1 year ago (3 children)

Did nobody fucking play Metal Gear Solid Peace Walker???

[–] nichos@programming.dev 19 points 1 year ago (1 children)
[–] DragonTypeWyvern@literature.cafe 25 points 1 year ago (1 children)

Or just, you know, have a moral compass in general.

[–] Immersive_Matthew@sh.itjust.works 30 points 1 year ago (2 children)

We are all worried about AI, but it's humans I worry about: how we will use AI, not the AI itself. I'm sure when electricity was invented people feared it too, but it was how humans used it that was, and always is, the risk.

[–] HiddenLayer5@lemmy.ml 26 points 1 year ago* (last edited 1 year ago) (3 children)

Remember: There is no such thing as an "evil" AI, there is such a thing as evil humans programming and manipulating the weights, conditions, and training data that the AI operates on and learns from.

[–] Zacryon@feddit.de 18 points 1 year ago

Evil humans also manipulated weights and programming of other humans who weren't evil before.

Very important philosophical issue you stumbled upon here.

[–] uis@lemmy.world 26 points 1 year ago (6 children)

Doesn't AI go into landmines category then?

[–] cows_are_underrated@feddit.de 25 points 1 year ago (3 children)

Saw a video where the military was testing a "war robot". The best strategy to avoid being killed by it was to act un-human-like (e.g. crawling or rolling your way toward the robot).

Apart from that, this is the stupidest idea I have ever heard of.

[–] lemba@discuss.tchncs.de 25 points 1 year ago* (last edited 1 year ago) (1 children)
[–] cheese_greater@lemmy.world 18 points 1 year ago

ACAB

All C-Suite are Bastards

[–] unreasonabro@lemmy.world 25 points 1 year ago (4 children)

Any intelligent creature, artificial or not, recognizes the Pentagon as the thing that needs to be stopped first.

[–] yardy_sardley@lemmy.ca 23 points 1 year ago (1 children)

For the record, I'm not super worried about AI taking over because there's very little an AI can do to affect the real world.

Giving them guns and telling them to shoot whoever they want changes things a bit.

[–] heygooberman@lemmy.today 22 points 1 year ago (2 children)

Didn't Robocop teach us not to do this? I mean, wasn't that the whole point of the ED-209 robot?

[–] aeronmelon@lemm.ee 35 points 1 year ago (3 children)

Every warning in pop culture (1984, Starship Troopers, RoboCop) has been misinterpreted as a framework upon which to nail the populace.

[–] solarzones@kbin.social 20 points 1 year ago

Now that’s a title I wish I never read.

[–] onlinepersona@programming.dev 19 points 1 year ago (2 children)

Makes me think of this great short movie Slaughterbots

[–] MindSkipperBro12@lemmy.world 18 points 1 year ago* (last edited 1 year ago) (4 children)

For everyone who’s against this, just remember that we can’t put the genie back in the bottle. Like the A Bomb, this will be a fact of life in the near future.

All one can do is adapt to it.

[–] AceFuzzLord@lemm.ee 18 points 1 year ago (6 children)

As disturbing as this is, it's inevitable at this point. If one of the superpowers doesn't develop their own fully autonomous murder drones, another country will. And eventually those drones will malfunction or some sort of bug will be present that will give it the go ahead to indiscriminately kill everyone.

If you ask me, it's just an arms race to see who builds the murder drones first.

[–] inconel@lemmy.ca 17 points 1 year ago
[–] CCF_100@sh.itjust.works 15 points 1 year ago (1 children)

Okay, are they actually insane?
