this post was submitted on 04 Sep 2024
RetroGaming
I remember during the pandemic lockdown, I would watch a YouTube live stream of a program learning to play the NES game Super Mario Bros. It took 4 days for it to learn to go right.
It was the kind of stream that you would watch for 2-3 minutes at a time. It live streamed 24/7 for months. You'd pop in once or twice a day, learn what the computer learned, and after a few minutes you saw it making the same mistakes time after time after time.
But the next day, you'd come in and find that it had made it past the pit it had been falling into the day before.
I was there when, after 9 days of nonstop playing, it finally beat 1-1 for the first time! Everybody celebrated so hard for about 5 seconds. Then on 1-2 it immediately ran left and died to the first goomba without any attempt to dodge.
I can only imagine this AI will be just as boring to watch for long periods, but exciting to watch over the course of weeks/months.
I can't help but think that if the "AI" performed that poorly on a new level, it wasn't just learning the wrong things; the entire approach wasn't right.
The "learning" isn't the same kind of learning that humans do. There is no abstraction or meta layer, only whether or not a sequence of inputs achieved an outcome deemed successful by a human. Programs like these interact with the game, essentially, as one static screenshot at a time. For any given configuration, the inputs that previously led to success (based on prior experience in the form of training) are reinforced so they become more likely, a bit like training a dog. Except a dog knows what a ball is.
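To make that concrete, here's a minimal sketch of what "reinforcing inputs per screenshot" can look like. All the names (`FrameByFrameAgent`, the action list, the update rule) are hypothetical illustration, not the actual stream's code; the point is just that the agent tracks numbers attached to opaque state/action pairs and has no concept of a goomba or a pit.

```python
import random
from collections import defaultdict

# Hypothetical sketch: each frame is an opaque state key, and the agent
# keeps a preference score per (state, action). After a run, the inputs
# that were used get nudged toward the run's outcome.

ACTIONS = ["left", "right", "jump", "noop"]

class FrameByFrameAgent:
    def __init__(self, epsilon=0.1, lr=0.5):
        self.q = defaultdict(float)   # (state, action) -> preference score
        self.epsilon = epsilon        # chance of trying a random input
        self.lr = lr                  # how strongly outcomes reinforce

    def act(self, state):
        # Usually pick the best-scoring known input, sometimes explore.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def reinforce(self, episode, reward):
        # Nudge every (state, action) used in the run toward the outcome.
        for state, action in episode:
            key = (state, action)
            self.q[key] += self.lr * (reward - self.q[key])
```

Nothing here generalizes: a state key the agent has never seen (say, the first frame of 1-2) scores zero for every action, so the agent is back to guessing.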
This is similar to how Google DeepMind's Go models worked. For any given configuration, a set of probabilities is generated based on the weights in the model, which are based on the training (initial values are arbitrary). The main difference is that Google could simulate zillions of AI vs. AI games at a high rate of speed. Anything with a live stream attached is mainly for entertainment value and subscriber count; otherwise you would run the game at 1,000x speed so the computer could actually train faster.
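The "weights → probabilities" step can be sketched very simply. This is not DeepMind's actual pipeline, just an illustration of the idea: the model assigns a numeric score (logit) to each legal move, a softmax turns the scores into probabilities, and a move is sampled. Since nothing needs to be rendered to a screen, this loop can run far faster than real time.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into probabilities that sum to 1.
    m = max(logits)                              # subtract max for stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def pick_move(moves, logits, rng=random):
    # Sample a move in proportion to its probability.
    probs = softmax(logits)
    return rng.choices(moves, weights=probs, k=1)[0]
```

In self-play training, two copies of the model call something like `pick_move` against each other millions of times, and the weights that produced the logits are updated toward the winning side's choices.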
But the side effect of this kind of training is that each level is a new experience. This is somewhat analogous to how infants learn to avoid holes while crawling, but then have to relearn that when they begin walking.
Yes, but if its first instinct is "go left" on 1-2, it's pretty apparent the reward function could use some tuning.
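"Run left into a goomba" usually means the reward never distinguished leftward from rightward movement in the first place. A common shaping for Mario-style levels (the function name and constants here are hypothetical, not from the stream) is to reward rightward progress, charge a small per-frame cost, and penalize death:

```python
def mario_reward(prev_x, curr_x, died, time_tick=0.1, death_penalty=15.0):
    """Hypothetical reward shaping for a side-scrolling level.

    Moving right earns reward, moving left loses it, every frame costs
    a little (so stalling scores worse than advancing), and dying is
    penalized hard enough that running into the first goomba is
    unambiguously a bad policy.
    """
    reward = curr_x - prev_x   # positive when moving right, negative left
    reward -= time_tick        # small per-frame cost
    if died:
        reward -= death_penalty
    return reward
```

With a reward like this, the very first exploratory runs already tell the agent that left is worse than right, instead of it having to rediscover that on every level.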