this post was submitted on 02 Aug 2023
361 points (94.1% liked)
Technology
"AI" are just advanced versions of the next word function on your smartphone keyboard, and people expect coherent outputs from them smh
Seriously. People like to project forward based on how quickly this technological breakthrough came on the scene, but they don't realize that, barring a few tweaks and improvements here and there, this is it for LLMs. It's the limit of the technology.
It's not to say AI can't improve further, and I'm sure that when it does, it will skillfully integrate LLMs. And I also think artists are right to worry about the impact of AI on their fields. But I think it's a total misunderstanding of the technology to think the current technology will soon become flawless. I'm willing to bet we're currently seeing it at 95% of its ultimate capacity, and that we don't need to worry about AI writing a Hollywood blockbuster any time soon.
In other words, the next step of evolution in the field of AI will require a revolution, not further improvements to existing systems.
For free? On the internet?
After a year or two of going live?
It depends on what you'd call a revolution. Multiple instances working together, orchestrating tasks with several other instances to evaluate progress and provide feedback on possible hallucinations, connected to services such as Wolfram Alpha for accuracy.
I think the whole orchestration network of instances could functionally surpass us soon in a lot of things if they work together.
But I'd call that evolution. Revolution would indeed be a different technique that we can probably not imagine right now.
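The orchestration idea above can be sketched as a simple loop: one "writer" instance drafts, a second "critic" instance flags claims it suspects are hallucinated, and an external fact-checking service (standing in for something like Wolfram Alpha) decides which flagged claims are actually wrong. All three callables here are hypothetical stand-ins for real LLM/API backends, not anything described in the thread.

```python
from typing import Callable

def orchestrate(task: str,
                writer: Callable[[str], str],
                critic: Callable[[str], list],
                fact_check: Callable[[str], bool],
                max_rounds: int = 3) -> str:
    """Revise a draft until the critic raises no claim that fails fact-checking."""
    draft = writer(task)
    for _ in range(max_rounds):
        # Critic returns a list of claims it suspects are hallucinated.
        suspect_claims = critic(draft)
        # Keep only the claims the external service actually refutes.
        refuted = [c for c in suspect_claims if not fact_check(c)]
        if not refuted:
            break
        # Ask the writer to revise with the refuted claims pointed out.
        draft = writer(f"{task}\nRevise; these claims are wrong: {refuted}")
    return draft
```

With real backends, `writer` and `critic` would each wrap a chat-completion call with their own system prompt, and `fact_check` would query an external knowledge API.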
It's just that everyone now refers to LLMs when talking about AI, even though AI has so many different aspects to it. Maybe at some point there will be an AI that actually understands the concepts and meanings of things. But that isn't learned by unsupervised web crawling.
It is possible to get coherent output from them though. I’ve been using the ChatGPT API to successfully write ~20-page proposals. Basically, I give it a prior proposal, the new scope of work, and a paragraph with other info it should incorporate. It then goes through one section at a time.
The numbers and graphics need to be put in afterward, but the result is better than what I’d get from my interns.
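A minimal version of that section-by-section workflow might look like this. The prompt wording and model name are my assumptions; the poster's actual scripts aren't shown in the thread. It needs `pip install openai` and an `OPENAI_API_KEY` in the environment to actually call the API.

```python
def build_section_prompt(section: str, prior_proposal: str,
                         scope_of_work: str, extra_info: str) -> list:
    """Assemble chat messages asking the model to draft one proposal section."""
    return [
        {"role": "system",
         "content": "You write formal project proposals."},
        {"role": "user",
         "content": (f"Prior proposal for reference:\n{prior_proposal}\n\n"
                     f"New scope of work:\n{scope_of_work}\n\n"
                     f"Other info to incorporate:\n{extra_info}\n\n"
                     f"Draft the '{section}' section of the new proposal.")},
    ]

def draft_section(section, prior_proposal, scope_of_work, extra_info):
    # Import here so the prompt-building part works without the package.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4",  # assumed; the thread doesn't say which model
        messages=build_section_prompt(section, prior_proposal,
                                      scope_of_work, extra_info),
    )
    return resp.choices[0].message.content
```

Looping `draft_section` over a list of section headings, then concatenating the results, gives the full draft; numbers and graphics still go in by hand afterward.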
I’ve also been using it (google Bard mostly actually) to successfully solve coding problems.
I either need to increase the credit I give LLMs or admit that interns are mostly just LLMs.
Are you using your own application to utilize the API or something already out there? Just curious about your process for uploading and getting the output. I've used it for similar documents, but I've been using the website interface which is clunky.
Just hacked together python scripts.
pip install openai
Just FYI, I dinked around with the available plugins, and you can do something similar. But, even easier is just to enable "code interpreter" in the beta options. Then you can upload and have it scan documents and return similar results to what we are talking about here.
I recently asked it a very specific domain architecture question, about whether a certain application would fit the needs of a certain business application, and the answer was very good. It showed a solid understanding of architecture, my domain, and the application.
So is your brain.
Relative complexity matters a lot, even if the underlying mechanisms are similar.
In the 1980s, Racter was released, and it was only slightly less impressive than current LLMs, mainly because it didn't have an Internet's worth of data to train on, but it could still write things like:
If anything, at least that's more entertaining than what modern LLMs can output.