this post was submitted on 21 Sep 2023
35 points (94.9% liked)
you are viewing a single comment's thread
Have you seen this paper:
https://arxiv.org/abs/2210.13382
The model builds an internal representation of the board state despite being trained on text alone. In the same vein, gpt-3.5-turbo-instruct has beaten the chess engine Fairy-Stockfish 14 at level 5, putting it at around 1800 Elo.
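For context, the paper's evidence for an "internal representation" comes from probing: training a small classifier on the model's frozen hidden activations to predict the state of each board square. Here is a minimal sketch of that kind of experiment, with made-up dimensions and random stand-in data rather than the paper's actual model and dataset:

```python
import torch
import torch.nn as nn

# Placeholder dimensions; the paper's actual setup differs.
HIDDEN_DIM = 512      # width of the sequence model's hidden states
NUM_SQUARES = 64      # board squares to predict
NUM_STATES = 3        # e.g. empty / mine / theirs

# The probe: a small classifier that reads only the frozen hidden
# activations of the sequence model and predicts every square at once.
probe = nn.Linear(HIDDEN_DIM, NUM_SQUARES * NUM_STATES)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in data: in a real experiment these would be activations captured
# from the model while it processes game transcripts, paired with the
# true board position at that point in the game.
activations = torch.randn(1024, HIDDEN_DIM)
true_board = torch.randint(0, NUM_STATES, (1024, NUM_SQUARES))

for epoch in range(10):
    logits = probe(activations).view(-1, NUM_SQUARES, NUM_STATES)
    loss = loss_fn(logits.permute(0, 2, 1), true_board)  # (N, classes, squares)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# If the probe's accuracy on held-out positions is far above chance, the
# activations carry decodable board-state information in some form.
```

Whether above-chance probe accuracy deserves to be called a "world model" is exactly what's under dispute below.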
I haven't read this paper yet (I'll update this comment once I do), but this sort of argument from emergent properties pops up from time to time to claim that the bots are handling something beyond tokens. It's weak for two reasons:
And note that the argument does not address the lack of pragmatic purpose in the bot's utterances. At all.
Specifically regarding games, what I think is happening is that the bot handles some logic based on the tokens themselves, in order to maximise the probability of a certain output (e.g. "you won"), as in the toy sketch below. That isn't even remotely surprising, and it doesn't require any further layer of abstraction to explain.
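As a toy caricature of that view (explicitly not how a transformer works internally, and with invented example games), here is a predictor that produces plausible-looking moves from token statistics alone, with no board object anywhere:

```python
from collections import Counter, defaultdict

# "Logic over tokens": predict the next move token purely from which
# tokens followed the same context in the training games.
training_games = [
    ["e4", "e5", "Nf3", "Nc6", "Bb5"],
    ["e4", "e5", "Nf3", "Nf6", "Nxe5"],
    ["e4", "c5", "Nf3", "d6", "d4"],
]

CONTEXT = 2  # how many previous tokens to condition on
counts = defaultdict(Counter)
for game in training_games:
    for i in range(CONTEXT, len(game)):
        context = tuple(game[i - CONTEXT:i])
        counts[context][game[i]] += 1

def predict_next(context):
    """Return the most frequent continuation of this token context."""
    seen = counts.get(tuple(context[-CONTEXT:]))
    return seen.most_common(1)[0][0] if seen else None

print(predict_next(["e4", "e5"]))  # -> "Nf3", from token statistics alone
```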
EDIT, from the paper:
Are you noticing the pattern? The researchers take for granted, as an unspoken premise, that the model will have some sort of "world representation", with no null hypothesis (H₀) along the lines of "there is no such thing".
And at the end of the day they proved that a chatbot can perform the same sort of logical operations that a "proper" game engine would.
Interesting take. What do you make of this one?
I think image models are a completely different beast from language models, and I'm simply not informed enough about them, so take what I'm about to say with a grain of salt.
I think it's possible that image models perform some sort of abstraction that resembles how humans handle images, including modelling a third dimension not present in the 2D picture, or abstractions like foreground vs. background. Whether they actually do, I don't know.
And unlike for language models, the image model hallucinations (e.g. people with six fingers) don't seem to contradict the idea that the model still recognises individual objects.
This video gives a decent explanation of what might be going on with the hands, if you're interested.
Thanks - I'll check it out.