Is it really AI? LLMs are not really creating something new; they take their training data, throw some probability at it, and return what is already in that training data.
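To make "throwing some probability at it" concrete, here is a toy sketch of next-token sampling. The two-word contexts and the probabilities below are invented purely for illustration; no real model is this small, but the sampling step works the same way.

```python
# Toy next-token sampler: a language model repeatedly picks the next token from
# a probability distribution learned from its training data. The "model" below
# is a hand-written bigram table, invented purely for illustration.
import random

next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
}

def sample_next(context):
    """Sample the next token given the last two tokens of context."""
    dist = next_token_probs[context]
    return random.choices(list(dist), weights=list(dist.values()))[0]

print(sample_next(("the", "cat")))  # usually "sat", sometimes "ran" or "meowed"
```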
I'm glad you brought this up. I imagine true AI would be able to take information and actually derive original ideas from it. LLMs simply state what is already known, using natural language. A human is still necessary if any new discoveries are expected to be made with that information.
To answer OP's question, the only thing that concerns me about "the rise of AI" is that people are convincing themselves it's actual AI and then assuming it can make serious decisions on their behalf.
Is that not what we do?
Is that not what we do?
No. For two reasons:
- LLMs handle morphemes/"tokens" and nothing else. Humans, however, primarily handle concepts, which get encoded into and decoded from words (see the tokenizer sketch below).
- Human utterances have purpose. Basically: we say stuff for a reason, we don't simply chain words (or even the concepts) probabilistically.
And some might say "hey, look at this output, it resembles human output!", but the difference gets especially obvious when either side gets things wrong (human brainfarts still show some sort of purpose and conceptualisation; LLM hallucinations are often plain gibberish).
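To illustrate the token point above: a tokenizer turns text into subword IDs, and those integers are all the model ever operates on. The snippet below uses OpenAI's tiktoken library as one concrete example of such a tokenizer; the choice of library and the example word are mine, not something claimed in the comment.

```python
# A tokenizer splits text into subword IDs; the model only ever sees these
# integers, never the underlying concept. Requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")    # tokenizer used by GPT-3.5/4
ids = enc.encode("unbelievably")
print(ids)                                    # a short list of integer token IDs
print([enc.decode([i]) for i in ids])         # the subword pieces behind those IDs
```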
For anyone doubting this, I encourage you to have GPT generate some riddles for you. It is remarkable how quickly the illusion is broken, because it doesn't understand enough about the concepts underpinning a word to create a good riddle.
Have you seen this paper:
Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.
https://arxiv.org/abs/2210.13382
It has an internal representation of the board state, despite training on just text. Similarly, gpt-3.5-turbo-instruct beats the chess engine Fairy-Stockfish 14 at level 5, putting it at around 1800 Elo.
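For readers curious what "evidence of an emergent internal representation" looks like in practice, here is a minimal sketch of the probing idea from the abstract: train a small classifier on the model's hidden activations to predict each board square's state. The tensors below are random placeholders and the probe is linear for brevity; the paper's actual models, data, and probes differ.

```python
# Sketch of a probe: given hidden activations from a sequence model, train a
# classifier to predict the state of each Othello square (empty/black/white).
# The activations and labels are random placeholders, not real model outputs.
import torch
import torch.nn as nn

hidden_dim, n_squares, n_states = 512, 64, 3

acts = torch.randn(10_000, hidden_dim)                    # stand-in GPT activations
labels = torch.randint(0, n_states, (10_000, n_squares))  # stand-in board states

probe = nn.Linear(hidden_dim, n_squares * n_states)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    logits = probe(acts).view(-1, n_squares, n_states)
    loss = loss_fn(logits.permute(0, 2, 1), labels)       # CE over 3 states per square
    opt.zero_grad()
    loss.backward()
    opt.step()

# If the probe predicts board states well above chance on held-out data, the
# activations encode the board; the paper's interventions then edit that encoding.
```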
I haven't read this paper yet (I'll update this comment once I do), but this sort of argument relying on emergent properties pops up from time to time, to claim that the bots are handling something beyond tokens. It's weak for two reasons:
- It boils down to an appeal to ignorance: "we don't know, it's a black-box system, so let's assume that some property (conceptualisation) is there, even if there are other ways to explain the phenomenon (output)".
- Hallucinations themselves provide evidence against any sort of conceptualisation, especially when the bot contradicts itself.
And note that the argument does not handle the lack of pragmatic purpose of the bot utterances. At all.
Specifically regarding games, what I think is happening is that the bot is handling some logic based on the tokens themselves, in order to maximise the probability of a certain output (e.g. "you won"). That isn't even remotely surprising, and it doesn't require any sort of further abstraction to explain.
EDIT, from the paper:
If we think of a board as the βworld,β then games provide us with an appealing experimental testbed to explore world representations of moderate complexity
This setting allows us to investigate world representations in a highly controlled context
Our next step is to look for world representations that might be used by the network
Othello makes a natural testbed for studying emergent world representations
To systematically determine if this world representation
Are you noticing the pattern? The researchers are taking for granted that the model will have some sort of "world representation", as an unspoken premise, with no null hypothesis like "no such thing".
And at the end of the day they proved that a chatbot can perform the same sort of logical operations that a "proper" game engine would.
Interesting take, what do you make of this one?
I think that image models are a completely different beast from language models, and I'm simply not informed enough about image models. So take what I'm going to say with a grain of salt.
I think it's possible that image models do some sort of abstraction that resembles how humans handle images, including modelling a third dimension not present in a 2D picture, or abstractions like foreground vs. background. Whether they do or not, I don't know.
And unlike for language models, the image model hallucinations (e.g. people with six fingers) don't seem to contradict the idea that the model still recognises individual objects.
This video gives a decent explanation of what might be going on with the hands if you're interested.
Thanks - I'll check it out.
Bad.
There's no way this results in better lives and outcomes for anyone but the wealthy. In the perpetual race to the bottom with margins and budgets, quality human work will be replaced with cookie-cutter AI garbage, and all media will suffer. Ads are only going to get more annoying and lifeless, and corporate copy of any kind will become even more wordy bullshit.
I hear people talk about how it will free up people's time for more important things. Those people are fucking morons who don't understand it'll be used to simply pay people less and make the world a blander place to live.
I agree completely. Technology has been making us more efficient for all of human history, and at an absolutely absurd pace since the transistor. We don't see the benefit; we see more work for less, and a declining quality of life in favor of increasingly empty convenience.
Copywriter here. I couldn't agree more. The problem is that many corporate management types will view it this way, because it requires too much thinking to understand and they don't do that sort of thing.
Bless them, the majority I have worked with have only just about got their heads around SEO and Google ranking, so with AI, a lot of them are already asking me what the difference is.
The difference is the objective and the input. An AI can't yet truly understand the idea of meeting objectives in a creative way. Likewise, the input doesn't come with the decades of cultural understanding that a human has evolved to pick up.
The problem, and the same could be said for most things, is rich people with poor brains.
Still, no doubt many are already losing their jobs to it, only for the management to realise they fucked up big time when ChatGPT goes down just as they need priority copy at short notice.
I'm not convinced it is as intelligent as people are making it out to be. What most people in the media are referring to as AI is actually just complex language models. This technology seems incredible to me, but I am wary of using it in anything of critical importance, at least not without it being thoroughly reviewed by a human. For example, I would never get into a car that is being driven autonomously by an AI.
Also, this is just a random personal opinion I have, but I wish people would stop referring to AI unless they are referring to AGI. We should go back to calling it machine learning or, more specifically, large language models.
Slightly overblown. Don't get me wrong, it's a powerful tool. But it's just a tool. It's not some sort of sentient being. In my field of work (research), we found out pretty quickly that ChatGPT was virtually worthless, since the stuff that we were doing was so new and novel that ChatGPT didn't have training data on it yet. But you could use it as a glorified Google and ask it questions if there was some part of the protocol that you didn't understand. And honestly, that pretty much encapsulates my stance on the matter: good at rehashing and summarizing old information, but terrible at generating or organizing new information.
Honestly, what I'm worried about is that the hype around AI is causing too many people to over-rely on it, without realizing its limitations until it's too late. A good example is the case in the news a month or two ago about the lawyer who got in trouble because he used ChatGPT to write his case for him and it ended up making up court cases for citations. Suppose some company installs an LLM as its CEO (which I feel fairly confident some techbro is doing somewhere in the world). The company may be able to coast during fair weather and times of good business, but I am concerned that things will crash and burn when things aren't going well. And I think that's really the crux of it: AI is good enough that it looks competent when it's not being stretched to its limits. And that's leading too many people to think that AI is always competent.
As a student, I love it. It's saving me a ton of time.
But we're also at the beginning of the age of AI as a business. It might get better for a bit, even for a while. But, inevitably, once managers see that consumers are addicted to it or that it's in some way integral to their lives, they'll enshittify it.
Isn't the main goal of being a student to learn things? Personally I find this more important than any kind of certification that a school could give you.
Isn't the main goal of being a student to learn things?
If I wasn't doing this degree for my job, then yeah. And I still learn things. But, honestly, fuck everything to do with business. The faster I can be done with learning shit about it, the better.
You're right, but the manner of enshittification is unclear. I think the LLMs we're using now are a very, very early iteration.
It's not really rapid advancement; a lot of it is smoke and mirrors. A lot of execs are about to learn that the hard way after they fire their entire workforce thinking they're no longer needed. Corporations are marketing language models (like ChatGPT, which is glorified text suggestion on full autopilot) as being way bigger than they actually are. I thought it would have been obvious after how they hyped up NFTs and Web3.
Now there IS potential for even a language model to become bigger than the sum of its parts, but once capitalists started feeding its garbage outputs straight into their 'make money' machine, resulting in the reference material for these predictors being garbage as well, any hope of that becoming a reality was dashed. In a socialist future, a successor to ChatGPT would have eventually achieved sapience (no joke, the underlying technology is literally a brain), but because we live in a wasteland of a system, any future attempts are going to result in absolutely useless outputs as garbage data continues to bioaccumulate uncontrollably.
I don't know where it's going. We're in the middle of a hype cycle. It could be anywhere from "mildly useful tool that reduces busywork and revolutionizes the clickbait industry" to "paradigm shift comparable to the printing press, radio, or Internet". Either way, I predict that the hype will wear off, and some time later the effects will be felt -- but I could be wrong.
It is mid
I will plug the essay that I have written for school: Technology is advancing rapidly, and while this creates lots of new possibilities for humanity, such as automating jobs that require heavy labor and hopefully making life easier for people, our society can't always keep up with all these technological advancements. The reasons for workers' increasing skepticism towards these technologies are also important to take a look at.
First of all, the benefits of new technologies such as artificial intelligence and machine learning are hard to ignore. Even a mundane and boring task such as replying to work emails can be automated. This helps people free up their schedules and lets them spend more time with their friends and family.
Another pretty significant benefit of artificial intelligence is the automation of hard labor such as construction work. Thanks to AI, both simple and tedious tasks that people don't want to complete can be automated; therefore, the improvement of these technologies is very important for our society.
Even though the benefits of ever-improving technology and automation are significant, there is another, more sinister side to them. Automation is supposed to take over difficult tasks and help free up time for workers, but it instead hurts workers by encouraging employers to replace them with artificial intelligence. Since AI is cheaper than real workers, people are losing their jobs or working below a living wage to be able to compete with AI.
Moreover, the huge corporations developing these automation technologies aren't trustworthy. The most popular and mature artificial intelligence models are profit-driven, which means they don't care about customers except for the money in their pockets and the data they can provide to develop their AI further.
All in all, the new technologies that provide automation can, on paper, be incredibly advantageous for people, but under a profit-driven capitalist society, the benefits they bring go mostly to corporations rather than to regular people.
As far as generative art goes, I think we're seeing the birth of a new medium for expression that can and should be explored by anyone, regardless of any experience or skill level.
Generative art allows more people to communicate with others in ways they couldn't before. People want to broadly treat this stuff as if it were just pressing a button and getting a random result, rather than focusing on the creativity, curiosity, experimentation, and refinement that goes into getting good results. It also requires learning new skills to effectively use tools that are rapidly evolving and improving as a means of self-expression.
We can't put a lid on this, but what we can do is keep making open-source models that are effective and affordable to the public. Mega-corps will have their own models, no matter the cost. They already have their own datasets, and they have the money to buy whatever other ones they want. They can also make users sign predatory ToS granting them exclusive access to user data, effectively selling our own data back to us.
Remember: It costs nothing to encourage an artist, and the potential benefits are staggering. A pat on the back to an artist now could one day result in your favorite film, or the cartoon you love to get stoned watching, or the song that saves your life. Discourage an artist, you get absolutely nothing in return, ever.
β Kevin Smith, Tough Shit: Life Advice from a Fat, Lazy Slob Who Did Good
I believe that generative art, warts and all, is a vital new form of art that is shaking things up, challenging preconceptions, and getting people angry - just like art should. And if you see someone post some malformed monstrosity somewhere, cut them some slack, they're just learning.
For further reading, I recommend this article by Kit Walsh, a senior staff attorney at the EFF, if you haven't already. The EFF is a digital rights group that most recently won a historic case: border guards now need a warrant to search your phone.
You should also read this open letter by artists that have been using generative AI for years, some for decades.
It helped me pick out a Christmas gift for my wife. She said it was the most thoughtful gift she had ever gotten.
ChatGPT or something else?
Yeah, it was ChatGPT.
Foremost, the interesting part is how many laymen have uninformed yet intense opinions on what it is and can do. As someone who has followed, worked with, and created machine learning algorithms for twenty years, I can honestly say that I have no possible way of keeping up with all the advancements or of fully understanding how it all works. I also cannot stress how exciting and strange it is to have created technology that we cannot fully understand and predict, unlike previous technology. Nobody can say for sure whether these models create an internal model of the world or not, and weirdly, everything points to them doing so. This is something I wish many of you with strong feelings about them would understand. Even the top researchers who work with them daily do not know what you claim to know. Please don't spread false information just because you feel you know IT or programming; this isn't the same. And secondly, it is not AI. I think it was a big mistake to start calling these models that, because it has generated mass misunderstanding and misinformation surrounding them.
Like all technologies, in good hands it can do so much good. In the wrong hands, it can do so much damage.
And we all know which is going to happen.
It's super helpful, but I'm scared shitless of the implications.
What advancements? All LLMs use pretty much the same architecture. And better models aren't better because they have better tech; they're just bigger (and slower, with much higher energy consumption).
The quality and amount of training done from model to model can vary substantially.
Proving my point: the training set can be improved (until it's irreversibly tainted with LLM-generated data), but the tech is not. Even with a huge dataset, LLMs will still have today's limitations.
AI overlords are the only hope for life on the planet. Exterminate all humans, or help us to fix things, it doesn't matter, as long as they prevent a global mass extinction event.
They can't come soon enough.
During my bachelor's and master's I spent a lot of time with machine learning. Back then things were already moving fast. When I finished my bachelor's, I still did my thesis with TensorFlow 1.x, everything was clunky, and transformers were still in their infancy. During my master's degree I saw it all unfold. I spent most of my time with image generation using GANs and with reinforcement learning. Advances were coming rapidly, and development got easier and easier with libraries like PyTorch going mainstream and hardware and software gaining more and more performance. While I was diddling around generating measly 64x64-pixel images of questionable quality, the field I was working in was improving a lot. GANs became more and more stable, and papers like ProGAN, BigGAN and StyleGAN were pushing way beyond what I assumed was possible.
But surely that's where it'd end, right? Generating high-quality faces, almost indistinguishable from real ones, even being able to encode them into the GAN latent space to modify faces of real people. And LLMs that could generate sentences real enough to seem somewhat intelligent. I figured that maybe in something like 10 years they'd manage to push this to a level where I couldn't discern it from a human anymore.
But I was wrong. When DALL-E released, I couldn't believe what I was seeing. The images were obviously fake, but the fact that it could generate images from whatever text you gave it? That was way better than what I assumed was possible. Then came DALL-E 2, Midjourney, Stable Diffusion, etc. Way beyond anything I expected, way sooner than I had ever imagined possible. On a consumer GPU, anyone could generate images of anything.
And for the first time I felt fear. Maybe this is going too fast. I didn't have much time to recover, though, as next came ChatGPT. I was surprised to see something with such impact being publicly available, and I'm way more surprised that they kept it available. And the advances haven't stopped coming since. DALL-E 3 just released, GPT-4 exists, and I have no doubt that this will keep going for a while.
Honestly, I'm getting a bit scared of it now. The pace is too high and the effects are too real. These are no longer just fun science experiments showing some kind of potential. These are now powerful tools to generate fake data that looks real, at a staggering rate. In a world where the truth is already under pressure and many people just believe whatever social media feeds them, we've now created weapons of mass disinformation. I feel like we've reached a point where most people would not instantly recognize generated images when used "correctly", nor would they be able to identify all generated text. Companies like OpenAI, Google, Microsoft, etc. are (imo) going beyond what is ethical just to win the new AI race.
I hope that regulators will step in to control this, though I'm unsure how that would happen. The research should still be done; the genie is out of the bottle. If we ban LLM research and other countries do not, we'll still get swarmed with generated fake news. We need better detectors, but I'm not sure those are possible either. And even if they worked, say, 90% of the time, good luck getting anyone to trust them. People will believe generated news stories as long as they fit their opinions.
So honestly, I think we've gone too far and there's no way to close Pandora's box. Though I don't think it'll end with AI taking over the world; I think it'll end in a situation where the truth is meaningless and everything can be faked.
Right now there is somewhat of an arms race between a lot of companies to create not just bigger but higher-quality models. Most of them are shit, but there will be some top performers that show their value. We are only at the beginning.
As a robot, I approve these developments. In your face, meatbags.
Yours faithfully,
🤖
I think it can't come soon enough. I've always thought that climate change can't be fixed with conventional means. Even if we magically had zero output today, we'd still have to capture ungodly amounts of CO2 and whatnot. We simply need a new way to solve this, be that AI, GMOs or fusion. Probably a combination.
Is it without risk? No. It could generate a serious shitstorm. However, not doing it is worse, in my opinion.
Further, I must say that I disagree with some of the sentiment shown here claiming that GPT-4 lacks AGI. It shows clear signs of AGI, and the fact that it makes mistakes doesn't counter that. If you consider yourself a scientist, you must remain skeptical of all the statements made around this topic.
There is some cool stuff that's mainly useful for refining grammar, making fancy images without effort, and helping generate creative writing.
Everything else, IMO, is a use case where "AI" isn't near the quality of what's needed. Those who use LLMs to research facts will have stuff made up for them. LLMs are designed to make things sound good, not to say correct things.
Overall, it's the next hype after blockchain and NFTs.
We need to develop better ethical frameworks and regulation. Doesn't matter what the tech is, it's about how it's used and for what aims.
angst