[–] masonlee@lemmy.world 1 points 9 months ago*

Hi! Thanks for the conversation. I’m aware of the 2022 survey referenced in the article. Notably, in only one year, respondents’ expected timelines shortened significantly. Here is the survey authors’ latest update: https://arxiv.org/abs/2401.02843 (click on PDF in the sidebar)

I consider Deep Learning to be new and a paradigm shift because only recently have we had the compute to prove its effectiveness. And the Transformer architecture enabling LLMs dates only to 2017. I don’t know what counts as new for you. (Also, I wouldn’t myself call it “programming” in the traditional sense: with neural nets we’re more “growing” AI, but you probably know this.)

If you are reading me as saying that generative AI alone scales to AGI, we are talking past each other. But I do disagree with you: I think Hinton and others are correct that there is already some form of reasoning and understanding in these models. (See https://youtu.be/iHCeAotHZa4 for a recent Hinton talk.) I don’t doubt that additional systems will be developed to add or improve reasoning and planning in AI processes, and I have no reason to doubt your earlier assertion that it will be a different additional system or paradigm. We don’t know when the breakthroughs will come. Maybe it’s “Tree of Thoughts”, maybe it’s something else. Things are moving fast. (And we’re already at the point where AI is used to improve next-gen AI.)

At any rate, I believe my initial point stands regardless of one’s timelines: the explicit goal of the top AI labs is to create AGI. To me, this is a fundamentally dangerous mission because of concerns raised in papers such as “Natural Selection Favors AIs over Humans”. (Not to mention the concerns raised in “An Overview of Catastrophic AI Risks”, many of which apply even to today’s systems.)

Cheers and wish us luck!

[–] Rhaedas@kbin.social 2 points 8 months ago

There are two dangers in the current race to AGI and in the inevitable ANI products developed along the way.

The first is that advancement and profit are the goals, while concern for AI safety and alignment (should the race succeed) has taken a back seat, if it’s even considered anymore.

The second is that we don’t even have to reach AGI for the consequences to be disastrous. Look at the damage early LLM usage has already done, and current models still aren’t good enough to fool anyone who looks closely. Now imagine a non-reasoning LLM able to manipulate any medium convincingly enough to pass even other AI testing tools. We’re just getting to that point: the latest AI Explained video discussed Gemini and Sora, and one of them (I think Sora) fooled some testers into thinking its generated stories were 100% human-written.

In short, we don’t need full general AI to end up in catastrophe; we’ll do it ourselves with the “lesser” ones. Which will really fuel things if AGI comes along and sees what we’ve done.