this post was submitted on 26 Aug 2023
401 points (85.9% liked)
No duh - why would it have any ability to do that sort of task?
Part of the reason for studies like this is to debunk people's expectations of AI's capabilities. A lot of people are under the impression that ChatGPT can do ANYTHING and can think and reason, when in reality it is a bullshitter that does nothing more than mimic what it thinks a suitable answer looks like. Just like a parrot.
Because if it's able to crawl all of the science pubs, then it would be able to try different combos until it works. Isn't that how it could/is being used, to test stuff?
It doesn't check the stuff it generates for anything other than grammatical and orthographical errors. It isn't intelligent and has no knowledge beyond how to create text. The text looks useful, but it doesn't know what that text contains the way something intelligent would.
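To make that concrete, here's a rough sketch of what "just creating text" means (this assumes the Hugging Face transformers library and the small public gpt2 model, not whatever OpenAI actually runs). The model only ever picks a plausible next token; nothing in the loop checks whether the output is true.

```python
# Rough sketch, assuming the transformers library and the public gpt2 checkpoint:
# the model repeatedly predicts a likely next token, and nothing verifies facts.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A promising new treatment for pancreatic cancer is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(30):
    logits = model(input_ids).logits[:, -1, :]             # scores for the next token only
    next_id = torch.argmax(logits, dim=-1, keepdim=True)   # pick the most likely token
    input_ids = torch.cat([input_ids, next_id], dim=-1)    # append it and repeat

print(tokenizer.decode(input_ids[0]))  # fluent-sounding text, no correctness check anywhere
```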
Recent papers have shown that LLMs build internal world models, but for a topic as niche and complicated as cancer treatment, a chatbot based on GPT-3.5 would be woefully ill-equipped to do any kind of proper reasoning.
It seems like it could check for that, though, which is what ChatGPT doesn't do but we all assumed it would. I'm sure there are AI programs out there that could, and do, check possibilities against only information we know to be true.
People who understand the technology did not assume that, but yes the general public has a lot of misconceptions about it.
If you want an AI that can create cancer treatment, you need to train it on creating cancer treatment, and not just use one that is trained on general knowledge. Even if you train it on science publications, all it can now reliably do is mimic a science journal since it has not been trained on how to parse the knowledge in the journal itself.
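For what it's worth, here's a minimal sketch of what "train it on the domain" might look like, assuming the Hugging Face transformers and datasets libraries, the small gpt2 checkpoint, and a hypothetical text file of oncology abstracts. Even then, all the model learns is to imitate the style of that text.

```python
# Minimal fine-tuning sketch. Assumes transformers/datasets are installed and a
# hypothetical file oncology_abstracts.txt exists; the result mimics domain prose,
# it does not gain any notion of truth or treatment reasoning.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "oncology_abstracts.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-oncology", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # afterwards it imitates the journal's style, nothing more
```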
Which is exactly the problem people think has been solved, but it isn't anywhere near being solved. It cannot comprehend semantics; the meaning of things is completely beyond it and all other AIs.
Unfortunately, saying "I made a thing that creates vaguely human-looking speech with little content" isn't astonishing to most people, so they go looking for something useful this breakthrough machine must be able to do, don't find anything, and we end up with articles like this.
Right, but can't they tell it to also try thousands and thousands of combos that humans could never do? I think ChatGPT is both super amazing and as stupid as a rock at the same time. I thought the vaccine used an AI to do that. I'm obviously clueless, I'm seriously asking.
I don't know about AI, but there are already computer programs that try many different combinations of, for example, chemical structures with known pharmacological properties and then output new drugs that could possibly be used to treat something. Of course you have to verify with research and studies.
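Something like this toy sketch of the "enumerate combinations and score them" idea (the fragment names and the score_candidate function are made-up placeholders, not any real cheminformatics tool):

```python
# Toy sketch only: exhaustively enumerate candidate combinations and rank them.
# A real system would score with chemistry-aware models and still need lab validation.
from itertools import combinations

fragments = ["frag_A", "frag_B", "frag_C", "frag_D", "frag_E"]  # hypothetical building blocks

def score_candidate(combo):
    # Placeholder score; a real scorer would estimate binding affinity, toxicity, etc.
    return hash("".join(combo)) % 100

candidates = [(combo, score_candidate(combo)) for combo in combinations(fragments, 3)]
candidates.sort(key=lambda pair: pair[1], reverse=True)

for combo, score in candidates[:3]:
    print(combo, score)  # top-scoring combos would then go on to real research and studies
```

The hard part is the scoring function and the validation afterwards, not the enumeration itself.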
I'm sure there will be AIs or machine learning programs, if there aren't already, that can do this as well and perhaps improve upon the process. But they would need to be specifically trained for that purpose. ChatGPT is an LLM; it's made to generate language that fits a given prompt. I would not expect it to be great at creating cancer treatments, and I'm not sure why we needed a study to learn that. OpenAI already tells you that the results can be inaccurate or outright wrong.
I'm in Seattle and surrounded by people who are techy while not being techy myself, so the innovations they talk about are mind-blowing. I thought ChatGPT at first was like all the other tech I heard about. But when you think about it, first of all they would never release something like that for free, and it would be too powerful in the hands of evil people. I was just letting people know what a non-techy thought.