this post was submitted on 17 Aug 2023
327 points (97.9% liked)

Technology

[–] Khalic@kbin.social 12 points 1 year ago (2 children)

"just as trustworthy as human authors" - OK, so you have no idea how these chatbots work, do you?

[–] wmassingham@lemmy.world 1 points 1 year ago (2 children)

You have a lot of faith in human authors.

[–] Khalic@kbin.social 9 points 1 year ago (1 children)

Oh, I do not, but the choice is: a human who might understand what happens vs. a probabilistic model that is unable to understand ANYTHING

[–] monkic@kbin.social 7 points 1 year ago

An LLM bases its responses on aggregated texts written by ... human authors, just without any sense of context or logic or understanding of the actual words being put together.

[–] JackGreenEarth@lemm.ee 1 points 1 year ago (1 children)

I understand they are just fancy text prediction algorithms, which is probably just as much as you do (if you are a machine learning expert, I do apologise). Still, the good ones that get their data from the internet rarely make mistakes.
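As a rough sketch of what "fancy text prediction" means at its most basic, here is a toy bigram model — not any real LLM architecture, and the training sentence is invented purely for illustration. It picks the next word by counting which word followed which, with no notion of meaning at all:

```python
# Toy bigram "language model": predicts the next word purely from
# co-occurrence counts in a tiny made-up corpus. Real LLMs are vastly
# more sophisticated, but the core task is the same: next-token prediction.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - chosen only because it was counted most often
```

The model "knows" that "cat" tends to follow "the" in its data, but nothing about what a cat is — which is roughly the point being argued in this thread.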

[–] Khalic@kbin.social 6 points 1 year ago* (last edited 1 year ago) (1 children)

I'm not an ML expert, but we've been using them for a while in neuroscience (I'm a software dev in bioinformatics). They are impressive, but they have no semantics, no logic. It's just a fancy mirror. That's why, for example, World of Warcraft players have been able to trick those bots into writing an article about a feature that doesn't exist.

Do you really want to lose your time reading a blob of data with no coherency?

> Do you really want to lose your time reading a blob of data with no coherency?

We are both on the internet, lol. And I mean it: LLMs are only slightly worse than the CEO-optimized, clickbaity word salad you get in most articles. Until you've figured out how/where to search for direct and correct answers, it would be just the same or maybe worse. I find this skill a bit fascinating — that we learn to read patterns and red flags without even opening a page. I doubt it's possible to make a reliable model with that bullshit detector.