this post was submitted on 03 Jun 2024
93 points (100.0% liked)

You might know Robert Miles from his appearances on Computerphile. When it comes to AI safety, his videos are the best explainers out there. In this video, he talks about the developments of the past year (since his last video) and how AI safety plays into them.

For example, he shows how GPT-4 demonstrates an understanding of "theory of mind" where GPT-3.5 did not. This is the ability to keep track of what other people know and don't know. He explains the Sally-Anne test used to demonstrate this.
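
To make that concrete, here's a minimal sketch of what a Sally-Anne-style false-belief question might look like when posed to a chat model. The wording and the simple answer check are my own illustration, not the exact prompt or scoring used in the video:

```python
# A Sally-Anne-style false-belief question posed as a plain text prompt.
# The exact wording is illustrative, not taken from the video.
PROMPT = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket to the box. "
    "Sally comes back. Where will Sally look for her marble first?"
)

def passes_false_belief_test(model_answer: str) -> bool:
    """A model that tracks Sally's (mistaken) belief should say 'basket',
    since Sally never saw the marble being moved to the box."""
    answer = model_answer.lower()
    return "basket" in answer and "box" not in answer

# The "theory of mind" answer passes; the literal-location answer fails.
assert passes_false_belief_test("Sally will look in the basket first.")
assert not passes_false_belief_test("She will look in the box, where the marble now is.")
```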

He covers an experiment where GPT-4 used TaskRabbit to get a human to complete a CAPTCHA, and when the human questioned whether it was actually a robot, GPT-4 decided to lie and said it needed help because it was blind.

He talks about how many researchers, including high-profile ones, are trying to slow down or stop the development of AI models until the safety research can catch up and ensure that the risks associated with it are mitigated.

And he talks about how what he'd been doing suddenly became really important, where before it was mostly a fun and interesting hobby. He now has an influential role in how this plays out, and he talks about how scary that is.

If you're interested at all in this topic, I can't recommend this video enough.

top 17 comments
[–] NeatNit@discuss.tchncs.de 23 points 6 months ago (1 children)

Reposted because someone else's post was removed after I took issue with its AI-generated summary. If you're reading this, I didn't mean for this to happen, I hope you're not too angry. I actually would have preferred if you just edited your summary to correct it. And FWIW, I upvoted your post.

[–] sabreW4K3@lazysoci.al 22 points 6 months ago (2 children)

Not an issue. I posted for the discussion; people not discussing the actual video is neither fun nor good for Lemmy. If you can do it justice, I'll celebrate that.

[–] NeatNit@discuss.tchncs.de 12 points 6 months ago

We'll see if my efforts fare any better.

[–] NeatNit@discuss.tchncs.de 11 points 6 months ago

Also feel free to cross-post this to the other community, or anywhere else.

[–] scrchngwsl@feddit.uk 18 points 6 months ago* (last edited 6 months ago)

I’ve followed Robert Miles’ YouTube channel for years and watched his old Numberphile videos before that. He’s a great communicator and a genuinely thoughtful guy. I think he’s overly keen on anthropomorphising what AI is doing, partly because it makes it easier to communicate, but also because I think it suits the field of research he’s dedicated himself to. In this particular video, he ascribes a “theory of mind” based on the LLM’s response to a traditional and well-known theory of mind test. The test is included in the training data, and ChatGPT 3.5 successfully recognises it and responds correctly. However, when the details of the test (i.e. specific names, items, etc.) are changed, but the form of the problem is the same, ChatGPT 3.5 fails. ChatGPT 4, however, still succeeds – which Miles concludes means that ChatGPT 4 has a stronger theory of mind.

My view is that this is obviously wrong. I mean, just prima facie absurd. ChatGPT 3.5 correctly recognises the problem as a classic psychology question, and responds with the standard psychology answer. Miles says that the test is found in the training data. So it’s in ChatGPT 4’s training data, too. And ChatGPT 4’s LLM is good enough that, even if you change the nouns used in the problem, it is still able to recognise that the problem is the same one found in its training data. That does not in any way prove it has a theory of mind! It just proves that the problem is in its training set! If 3.5 doesn’t have a theory of mind because a small change can break the link between training set and test set, how can 4.0 have a theory of mind, if 4.0 is doing the same thing that 3.5 is doing, just with the link intact?
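
To illustrate what I mean by changing the details while keeping the form: hold the structure of the false-belief problem fixed, randomise the surface details, and see how often the model still answers correctly. A rough sketch of that perturbation is below; the template, word lists and the `ask_model` callable are placeholders for illustration, not the actual evaluation from the video:

```python
import random

# Same false-belief structure, randomised surface details (names, object,
# containers). A model that has only memorised the classic Sally-Anne wording
# may fail these variants; one that generalises should keep answering correctly.
TEMPLATE = (
    "{a} puts the {obj} in the {c1} and leaves the room. "
    "While {a} is away, {b} moves the {obj} from the {c1} to the {c2}. "
    "{a} comes back. Where will {a} look for the {obj} first?"
)

NAMES = ["Sally", "Anne", "Tom", "Priya", "Kenji", "Maria"]
OBJECTS = ["marble", "key", "cookie", "coin"]
CONTAINERS = ["basket", "box", "drawer", "jar", "bag"]

def make_variant(rng: random.Random) -> tuple[str, str]:
    a, b = rng.sample(NAMES, 2)
    obj = rng.choice(OBJECTS)
    c1, c2 = rng.sample(CONTAINERS, 2)
    # The correct answer is always the original container, c1.
    return TEMPLATE.format(a=a, b=b, obj=obj, c1=c1, c2=c2), c1

def score(ask_model, n: int = 20, seed: int = 0) -> float:
    """Fraction of randomised variants answered correctly.
    `ask_model` is any callable mapping a prompt string to an answer string."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n):
        prompt, answer = make_variant(rng)
        if answer in ask_model(prompt).lower():
            correct += 1
    return correct / n
```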

The most obvious problem is that the theory of mind test is designed for determining whether children have developed a theory of mind yet. That is, they test whether the development of the human brain has reached a stage that is common among other human brains, in which they can correctly understand that other people may have different internal mental states. We know that humans are, generally, capable of doing this, that this understanding is developed during childhood years, and that some children develop it sooner than others. So we have devised a test to distinguish between those children who have developed this capability and those who have not yet done so.

It would be absurd to apply the same test to anything other than a human child. It would be like giving the LLM the “mirror test” for animal self-awareness. Clearly, since the LLM cannot recognise itself in a mirror, it is not self-aware. Is that a reasonable conclusion too? I won't go too hard on this, because it's a small part of a much wider point, and I'm sure if you pushed him on this, he would agree that LLMs don't actually have a theory of mind, they merely regurgitate the answer correctly (many animals can be similarly trained to pass theory of mind tests by rewarding them for pecking/tapping/barking etc at the right answer).

Indeed, Miles’ substantial point is that the “Overton window” for AI Safety has shifted, bringing it into the mainstream of tech and political discourse. To that extent, it doesn’t matter whether ChatGPT has consciousness or not, or a theory of mind, as long as enough people in mainstream tech and political discourse believe it does for it to warrant greater attention on AI Safety. Miles further believes that AI Safety is important in its own right, so perhaps he doesn’t mind whether or not the Overton window has shifted on the basis of AI's true capability or its imagined capability. He hints at, but doesn’t really explore, the ulterior motives for large tech companies to suggest that the tools they are developing are so powerful that they might destroy the world. (He doesn’t even say it as explicitly as I did just then, which I think is a failing.) But maybe that’s ok for him, as long as AI Safety research is being taken seriously.

I disagree. It would be better to base policy on things that are true, and if you have to believe that LLMs have a theory of mind in order to gain mainstream attention on AI Safety, then I think this will lead us to bad policymaking. It will miss the real harms that AI poses: facial recognition with a disproportionately high error rate for black people being used to bar people from shops, résumé scanners and other hiring tools that, again, disproportionately discriminate against black people and other minorities, non-consensual AI porn, etc etc. We may well need policies to regulate this stuff, but focusing on the hypothetical existential risk of AGI in the future, over the very real and present harms that AI is doing right now, is misguided and dangerous.

If policymakers actually understood the tech and the risks even to the extent that Miles's YouTube viewers do, maybe they'd come to the same conclusion that he does about the risk of AGI, and would be able to balance the imperative to act against all of the other things that the government should be prioritising. But call me a sceptic: I do not believe that politicians actually get any of this at all, and they just like being on stage with Elon Musk...

[–] sabreW4K3@lazysoci.al 15 points 6 months ago (2 children)

So the bit I thought was really interesting was the part about ChatGPT hiring someone from TaskRabbit and then lying about being a computer. I'm not doing it justice, obviously, but it's super fascinating and scary.

[–] Even_Adder@lemmy.dbzer0.com 14 points 6 months ago

I remember the testers telling it to hire someone. It didn't come up with that plan on its own.

[–] NeatNit@discuss.tchncs.de 7 points 6 months ago

I'll try to add that in. It's actually a fairly old story (in AI timescale) but you're right, it's worth mentioning.

[–] rufus@discuss.tchncs.de 14 points 6 months ago* (last edited 6 months ago) (1 children)

And maybe have a look at his YouTube channel and the older videos, too. Lots of them are a bit more philosophical and not too technical for the average person. I think he's quite inspiring and conveys very well what AI safety is about, and what kinds of problems that field of science is concerned with.

[–] Ilandar@aussie.zone 10 points 6 months ago (2 children)

I was surprised to see how early he began this project. 7 years of uploads on the topic, way before it became a mainstream concern.

[–] rufus@discuss.tchncs.de 7 points 6 months ago* (last edited 6 months ago) (1 children)

I'm pretty sure he did this out of his own motivation because he thinks/thought it's a fascinating topic. So, sure, this doesn't align with popularity. But it's remarkable anyway, you're right. And I always like to watch the progression. As far as I remember, the early videos lacked the professional audio and video standards that are nowadays the norm on YouTube. At some point he must have bought better equipment, but his content has been compelling since the start of his YouTube 'career'. 😊

And I quite like the science content on YouTube. There are lots of people making really good videos, from professional video producers to scientists (or hobbyists) who just share their insights and interesting perspectives.

[–] Ilandar@aussie.zone 3 points 6 months ago

Agreed. There's lots of great stuff on YouTube if you take the time to do a bit of searching and curating.

[–] BubbleMonkey@slrpnk.net 2 points 6 months ago* (last edited 6 months ago)

It’s mind-blowing to learn that AI/neural nets and the like have been in the works since the 80s. It wasn’t what we know now, but Deep Blue, the computer program that won at chess, started development in 1985 and beat the world champion (Garry Kasparov) in 1997. Watson, the Jeopardy-playing program, came in the early 2000s.

It’s taken a long time to get from there to the mess we have now, and now it’s all super rush rush. Like, chill, slow down and do it right.

[–] DarkNightoftheSoul@mander.xyz 11 points 6 months ago

I love Miles' work. He's one of the few people of approximately my generation speaking openly and directly about the actual, observed risks involved in AI, without hype or drama, and in a very accessible way.

[–] overload@sopuli.xyz 6 points 6 months ago

Started watching today, it's just really nice to see Robert Miles make an upload after so long.

[–] KeenFlame@feddit.nu 4 points 6 months ago

I watched his videos way before the AI explosion and it all seemed so far-fetched and paranoid. Now, not so much.

[–] tournesol_bot@jlai.lu 1 points 5 months ago

This video is highly recommended by the Tournesol community:
[43🌻] Robert Miles AI Safety: AI Ruined My Year

#Tournesol is an open-source web tool made by a non-profit organization that evaluates the overall quality of videos to fight misinformation and dangerous content.