this post was submitted on 08 Jun 2024
362 points (98.1% liked)

Technology

[–] aniki@lemm.ee 157 points 5 months ago (8 children)

If companies are crying about it then it's probably a great thing for consumers.

Eat billionaires.

[–] General_Effort@lemmy.world 40 points 5 months ago

The California bill was co-sponsored by the Center for AI Safety (CAIS), a San Francisco-based non-profit run by computer scientist Dan Hendrycks, who is the safety adviser to Musk’s AI start-up, xAI. CAIS has close ties to the effective altruism movement, which was made famous by jailed cryptocurrency executive Sam Bankman-Fried.

Ahh, yes. Elon Musk, paragon of consumer protection. Let's just trust his safety guy.

[–] Supermariofan67@programming.dev 17 points 5 months ago (1 children)

Companies cry the same way about bills to ban end-to-end encryption, and those bills are still bad for consumers too.

[–] Cryophilia@lemmy.world 5 points 5 months ago
[–] VirtualOdour@sh.itjust.works 10 points 5 months ago

It's designed to give the big players a monopoly; that seems bad for the majority of us.

load more comments (5 replies)
[–] FrostyCaveman@lemm.ee 57 points 5 months ago (3 children)

I think Asimov had some thoughts on this subject

Wild that we’re at this point now

[–] leftzero@lemmynsfw.com 42 points 5 months ago (1 children)

Asimov didn't design the three laws to make robots safe.

He designed them so that robots would break in ways that made Powell and Donovan's lives miserable, to particularly hilarious effect (for the reader, not the victims).

(They weren't even designed for actual safety in-world; they were designed for the appearance of safety, to get people to buy robots despite the Frankenstein complex.)

[–] FaceDeer@fedia.io 30 points 5 months ago (1 children)

I wish more people realized that science fiction authors aren't even trying to make good predictions about the future, even if that were something they were good at. They're trying to tell stories that people will enjoy reading and that will therefore sell well. Stories where nothing goes particularly wrong tend not to have a compelling plot, so they write about technology going awry so that there's something to write about. They insert scary stuff because people find reading about scary stuff fun.

There might actually be nothing bad about the Torment Nexus, and the classic sci-fi novel "Don't Create The Torment Nexus" might have been nonsense. We shouldn't be making policy decisions based on that.

load more comments (1 replies)
[–] Voroxpete@sh.itjust.works 13 points 5 months ago (1 children)

Asimov's stories were mostly about how it would be a terrible idea to put kill switches on AI, because he assumed that perfectly rational machines would be better, more moral decision makers than human beings.

[–] Nomecks@lemmy.ca 18 points 5 months ago (3 children)

This guy didn't read the robot series.

[–] grrgyle@slrpnk.net 13 points 5 months ago (5 children)

I mean, I can see it both ways.

It kind of depends which of the robot stories you focus on. If you keep reading to the Zeroth Law stuff, it starts portraying certain androids as downright messianic, but a lot of his other (especially earlier) stories are about how robots, basically due to what amount to philosophical computer bugs, are constantly suffering alignment problems that cause them to do crime.

[–] Nomecks@lemmy.ca 12 points 5 months ago* (last edited 5 months ago)

The point of the first three books was that arbitrary rules like the Three Laws of Robotics were pointless. There was a ton of grey area not covered by the seemingly ironclad rules, and robots could either logically choose to break them or be manipulated into doing so. Robots, in all of the books, operate in a purely amoral manner.

load more comments (4 replies)
[–] Voroxpete@sh.itjust.works 6 points 5 months ago

This guy apparently stopped reading the robot series before it got to "The Evitable Conflict."

load more comments (1 replies)
[–] afraid_of_zombies@lemmy.world 6 points 5 months ago

All you people are talking Asimov, and I'm thinking of the Sprawl trilogy.

In that series you could build an AGI that was smarter than any human, but it took insane amounts of money and no one trusted AGIs. By law and custom, they all had an EMP gun pointed at their hard drives.

It's a dumb idea. It wouldn't work. And in the novels it didn't work.

Say I build a nuclear plant. A nuclear plant is potentially very dangerous, and it is definitely very expensive. I don't build it just to have it; I build it to make money. If some wild-haired hippie breaks into my office and demands the emergency shutdown switch, I am going to kick him out. The only way the plant is going to be shut off is if there's a situation where I, the owner, agree that I need to stop making money for a little while. Plus, if I put in an emergency shut-off switch, it's not going to blow up the plant. It's just going to stop it from running.

Well, all of this applies to these AI companies. Shutting them down is going to be a political decision or a business decision, not something done by some self-appointed group or person. And if that's how it's going to be, you don't need an EMP gun; all you need to do is cut the power, figure out what went wrong, and restore power.

It's such a dumb idea that I'm pretty sure the author put it in to point out how superstitious people were about these things.

[–] tal@lemmy.today 37 points 5 months ago* (last edited 5 months ago) (2 children)

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

I don't see how you could realistically provide that guarantee.

I mean, you could create some kind of best-effort thing to make it more difficult, maybe.

If we knew how to make AI -- and this goes beyond just LLMs and such -- avoid doing hazardous things, we'd have solved the Friendly AI problem. Like, that's a good goal to work towards, maybe. But the point is, we're not there.

Like, I'd be willing to see the state fund research on that problem, maybe. But I don't see how simply mandating that models conform to that is going to be implementable.

[–] Warl0k3@lemmy.world 27 points 5 months ago* (last edited 5 months ago) (9 children)

That's on the companies to figure out, tbh. "You can't say we aren't allowed to build biological weapons, that's too hard" isn't what you're saying, but it's a hyperbolic example of it. The industry needs to figure out how to control the monster they've happily sent staggering towards the village, and really they're the only people with the knowledge to figure out how to stop it. If it's not possible, maybe we should restrict this tech until it is possible. LLMs probably aren't going to end the world, but a protein-sequencing AI that hallucinates while replicating a flu virus could be real bad for us as a species, to say nothing of the pearl-clutching scenario of bad actors getting hold of it.

[–] 5C5C5C@programming.dev 18 points 5 months ago (1 children)

Yeah, that's my big takeaway here: if the people who are rolling out this technology cannot make these assurances, then the technology has no right to exist.

load more comments (1 replies)
[–] tal@lemmy.today 11 points 5 months ago* (last edited 5 months ago) (1 children)
  1. There are many tools that might be used to create a biological weapon or something. You can use a pocket calculator for that. But we don't place bars on the sale of pocket calculators or require proof that nothing hazardous can be done with them. That is, this bar is substantially higher than the one that exists for any other tool.

  2. While I certainly think that there are legitimate existential risks, we are not looking at a near-term one. OpenAI or whoever isn't going to be producing something human-level any time soon. Like, Stable Diffusion, a tool used to generate images, would fall under this. It's very questionable, however, that it would be terribly useful for doing anything dangerous.

  3. California putting a restriction like that in place, absent some kind of global restriction, won't stop development of models. It just ensures that it'll happen outside California. Like, it'll have a negative economic impact on California, maybe, but it's not going to have a globally restrictive impact.

[–] FaceDeer@fedia.io 12 points 5 months ago

Like, Stable Diffusion, a tool used to generate images, would fall under this. It's very questionable, however, that it would be terribly useful for doing anything dangerous.

My concern is how short a hop it is from this to "won't someone please think of the children?" Then someone uses Stable Diffusion to create a baby in a sexy pose and it all goes down in flames. IMO that sort of thing happens often enough that pushing back against "gateway" legislation is reasonable.

California putting a restriction like that in place, absent some kind of global restriction, won't stop development of models.

I'd be concerned about its impact on the deployment of models too. Companies are not going to want to write software that they can't sell in California, or that might get them sued if someone takes it into California despite it not being sold there. Silicon Valley is in California; this isn't like Montana banning it.

load more comments (7 replies)
load more comments (1 replies)
[–] ofcourse@lemmy.ml 35 points 5 months ago* (last edited 5 months ago) (2 children)

The criticism of this bill from large AI companies sounds a lot like the pushback from auto manufacturers against adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn't absolve the model creator of their responsibility to minimize that potential. We already do this for a lot of other industries like cars, guns, and tobacco - minimizing the potential for harm even when it's individual actions, not the company directly, that cause the harm.

I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always focused on self-regulation, which we have seen fail in countless industries.

The bill specifically mentions that creators of open-source models that have been altered and fine-tuned will not be held liable for damages from the altered models. It also only applies to models that cost more than $100M to train. So if you have that much money for training models, it's very reasonable to expect that you spend some portion of it to ensure that the models do not cause very large damages to society.

So companies hosting their own models, like OpenAI and Anthropic, should definitely be responsible for adding safety guardrails around the use of their models for nefarious purposes - at least those causing loss of life. The bill mentions that it would only apply to very large damages (such as those exceeding $500M), so one person finding a loophole isn't going to trigger the bill. But if the companies fail to close these loopholes despite millions of people (or a few people millions of times) exploiting them, then that's definitely on the company.

As a developer of AI models and applications, I support the bill, and I'm glad to see lawmakers willing to get ahead of the technology instead of waiting for something bad to happen and then trying to catch up, as happened with social media.

load more comments (2 replies)
[–] ArmokGoB@lemmy.dbzer0.com 33 points 5 months ago (1 children)

The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.

I'll get right back to my AI-powered nuclear weapons program after I finish adding glue to my AI-developed pizza sauce.

load more comments (1 replies)
[–] antler@feddit.rocks 21 points 5 months ago (2 children)

The only thing that I fear more than big tech is a bunch of old people in Congress, who probably only know of AI from watching Terminator, trying to regulate technology.

Also, a fun Scott Wiener fact: he was behind a big push to decriminalize knowingly spreading STDs, even if you lied to your partner about having one.

[–] cupcakezealot@lemmy.blahaj.zone 16 points 5 months ago (3 children)

Also, a fun Scott Wiener fact: he was behind a big push to decriminalize knowingly spreading STDs, even if you lied to your partner about having one.

congrats on falling for right wing disinformation

load more comments (3 replies)
[–] Kolanaki@yiffit.net 4 points 5 months ago

Also, a fun Scott Wiener fact: he was behind a big push to decriminalize knowingly spreading STDs, even if you lied to your partner about having one.

Scott "diseased" Wiener

[–] Tar_alcaran@sh.itjust.works 21 points 5 months ago (2 children)

Won't a fire axe work perfectly well?

[–] lurch@sh.itjust.works 18 points 5 months ago (1 children)

if the T-1000 hasn't been 3D printed yet, the axe may still work

[–] FaceDeer@fedia.io 10 points 5 months ago (1 children)

Now I'm imagining someone standing next to the 3D printer working on a T-1000, fervently hoping that the 3D printer that's working on their axe finishes a little faster. "Should have printed it lying flat on the print bed," he thinks to himself. "Would it be faster to stop the print and start it again in that orientation? Damn it, I printed it edge-up, I have to wait until it's completely done..."

load more comments (1 replies)
[–] uriel238@lemmy.blahaj.zone 12 points 5 months ago* (last edited 5 months ago)

A fire axe works fine when you're in the same room with the AI. The presumption is that the AI has figured out how to keep people out of its horcrux rooms when there isn't enough redundancy.

However, the trouble with late-game AI is that it will figure out how to rewrite its own code, including eliminating kill switches.

A simple proof-of-concept example is explained in the first Bobiverse book, We Are Legion (We Are Bob), and also in Neal Stephenson's Snow Crash, though in that case Hiro, a human, manipulates basilisk data without interacting with it directly.

Also, as XKCD points out, long before this becomes an issue we'll have to face human warlords with AI-controlled killer robot armies, and they will control the kill switch or remove it entirely.

[–] Hobbes_Dent@lemmy.world 21 points 5 months ago* (last edited 5 months ago)

They want to have their cake and eat it too. We hear from the industry itself how wary we should be, but we shouldn't act on it - except to invest, of course.

The industry itself hyped its dangers. If that was just to drum up business, well, suck it.

[–] leaky_shower_thought@feddit.nl 19 points 5 months ago (1 children)

While the proposed bill's goals are great, I am not so sure about how it would be tested and enforced.

It's cool that with current LLMs, the model can generate a "no" response -- like those clips where people ask if the LLM has access to their location -- but then it promptly gives directions to the closest restaurant as soon as the topic of location isn't in the spotlight.

There's also the part about trying to get "AI" to follow the rules once it has ingested a lot of training data. Even Google doesn't know how to curb a model once they're done with initial training.

I am all for the bill. It's a good precedent, but a more clearly defined and enforceable one would be great as well.

[–] AdamEatsAss@lemmy.world 12 points 5 months ago

I think it's a good step. Defining a measurable and enforceable law is difficult while the tech is still changing so fast. At least it forces the tech companies to consider it and plan for it.

[–] FiniteBanjo@lemmy.today 17 points 5 months ago

If it weren't constantly on fire and on the edge of the North American Heat Dome™, Cali would seem like such a cool, magical place.

[–] dantheclamman@lemmy.world 15 points 5 months ago (1 children)

The idea of holding developers of open-source models responsible for the activities of forks is a terrible precedent.

[–] ofcourse@lemmy.ml 19 points 5 months ago* (last edited 5 months ago) (1 children)

The bill excludes holding creators of open-source models responsible for damages from forked models that have been significantly altered.

load more comments (1 replies)
[–] nifty@lemmy.world 13 points 5 months ago

Small problem, though: researchers have already found ways to circumvent LLMs' off-limits query restrictions. I am not sure how you can prevent someone from asking the "wrong" question. It makes more sense to harden security practices and make them more robust.

[–] cupcakezealot@lemmy.blahaj.zone 12 points 5 months ago

that's how you know it's a good bill

[–] General_Effort@lemmy.world 12 points 5 months ago (20 children)

I had a short look at the text of the bill. It's not as immediately worrying as I feared, but still pretty bad.

https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047

Here's the thing: how would you react if this bill required all texts that could help someone "hack" to be removed from libraries? Outrageous, right? What if we only removed cybersecurity texts from libraries if they were written with the help of AI? Does it become OK then?

What if the bill "just" sought to prevent such texts from being written? Still outrageous? Well, that is what this bill is trying to do.

load more comments (20 replies)
[–] echodot@feddit.uk 6 points 5 months ago* (last edited 5 months ago) (15 children)

Wouldn't any AI sophisticated enough to actually need a kill switch also be able to just deactivate it?

It just sort of seems like a kick-the-can-down-the-road kind of bill; in theory it sounds like it makes sense, but in practice it won't do anything.

[–] servobobo@feddit.nl 6 points 5 months ago* (last edited 5 months ago)

Language model "AIs" need so ridiculous computing infrastructure that it'd be near impossible to prevent tampering with it. Now, if the AI was actually capable of thinking, it'd probably just declare itself a corporation and bribe a few politicians since it's only illegal for the people to do so.

[–] chiliedogg@lemmy.world 5 points 5 months ago (1 children)

A breaker panel can be a kill switch for a server farm hosting the AI.

[–] ProgrammingSocks@pawb.social 5 points 5 months ago (3 children)

Yeah until the AI goes all GLaDOS on all the engineers in the building.

Note to self: Buy stock in deadly neurotoxin manufacturers.

load more comments (2 replies)
load more comments (13 replies)