this post was submitted on 21 Oct 2023
503 points (96.0% liked)

‘Nothing is changing’ — Reddit is denying a report from The Washington Post that it might force users to log in to see content if it can’t reach deals with AI companies

Reddit initially denied a report from The Washington Post that it might force users to log in to see content. However, the Post says it may still block search crawlers, and Reddit didn’t deny to The Verge that it may do so.

[–] Pika@sh.itjust.works 34 points 1 year ago (3 children)

People talk about how big AI is, but it'll crash like everything else as enshittification hits. I tried to use Bing AI the other day for the first time in a few months, and it didn't even let me do more than a handful of entries before locking me out, saying I'd used too many queries in 24 hours. How is that supposed to be helpful to a consumer as a valid feature if you lock it down?

[–] spark947@lemm.ee 11 points 1 year ago (4 children)

Yeah, hasn't anyone else noticed that there hasn't been a single profitable product to come out of it? Even Copilot is biting the dust already as they try to reduce computing costs. I haven't heard of a single person actually paying for ChatGPT access, either...

[–] zecg@lemmy.world 6 points 1 year ago

I paid 20 bucks plus VAT last week for a month of access to try the new DALL-E in ChatGPT; it had just dropped.

[–] stockRot@lemmy.world 5 points 1 year ago (2 children)

I work in healthcare tech and can guarantee there are exciting things coming down the pipeline in that domain.

[–] spark947@lemm.ee 2 points 1 year ago (1 children)

Lol. I'll believe it when I see it. I don't think LLMs in particular will do that much good in healthcare, and I'd be particularly wary of them diagnosing patients. Aside from some very limited signal analysis for telehealth, I'm skeptical of applying "new" AI to healthcare. I believe it will be a disaster.

[–] stockRot@lemmy.world 1 points 1 year ago (1 children)

Unless you're a medical professional, you probably won't see it.

[–] spark947@lemm.ee 1 points 1 year ago

If you're in enterprise software, this stuff is pretty malleable. You're regularly asked to give pitches and lectures on medical projects, for sales reasons. You'd be surprised: most people who work on this stuff don't know the first thing about medicine.

My mom's a doctor, so I can ask her for a bit of insight about this stuff. The challenges facing healthcare don't have that much to do with technology, at least in the US.

[–] KinglyWeevil@lemmy.dbzer0.com 1 points 1 year ago (1 children)
[–] stockRot@lemmy.world 1 points 1 year ago (2 children)

We're largely still working with LLMs at the moment: using them to immediately pull in relevant clinical information from previous encounters when a doctor sees a patient, or using generative AI to edit doctors' messages to patients to be more empathetic and... human (our pilot organizations have really loved this one so far). We're also using procedure codes on claims to guess whether certain diagnoses were missed and to build more robust health risk profiles for populations as a whole; those are a bit more NLP/data mining.
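
To make the second example concrete, here's a minimal sketch of what an "edit for empathy" step could look like, assuming an OpenAI-style chat completions API. The model name, prompt, and soften_message helper are illustrative stand-ins, not our actual pipeline.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def soften_message(draft: str) -> str:
    """Ask the model to rewrite a clinician's draft message in a warmer tone."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the clinician's message so it is empathetic and "
                    "plain-spoken, without changing any medical facts."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(soften_message("Labs normal. No follow-up needed."))
```

In practice the rewritten draft still goes back to the clinician for review before anything is sent to a patient.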

[–] spark947@lemm.ee 2 points 1 year ago

Much of this seems like a bad idea imo.

[–] azulavoir@sh.itjust.works 1 points 1 year ago

Hey, same (or at least similar)

[–] bamboo@lemm.ee 1 points 1 year ago (1 children)

Many of my coworkers and I pay for ChatGPT. It's super useful at work and saves a considerable amount of time.

[–] spark947@lemm.ee 1 points 1 year ago

I'm in all the business meetings: "Use ChatGPT to save time generating analysis," or something like that. I think it's been proven that merely using it as a tool to generate content isn't profitable; at some level, even your paid subscription is subsidized by VC money. The real test is whether it provides "valuable" content. But then why would your employer even need you to write the prompts? Don't worry, I believe LLMs are fundamentally incapable of this and that your job is safe.

[–] glockenspiel@programming.dev 6 points 1 year ago (1 children)

Those companies learned their lesson from search engines. They gave it away for free for far too long and with too few strings attached. It became impossible to realistically gate features and charge for them.

Chatbots, on the other hand, just need a little big-money razzle-dazzle and, boom, now it's "AI" and people are conditioned to accept any limits thrown at them.

[–] spark947@lemm.ee 5 points 1 year ago

Google made a very profitable ad business from their search engine.

[–] KinglyWeevil@lemmy.dbzer0.com 4 points 1 year ago

I get why some limits are necessary; a company doesn't want a Microsoft Tay repeat. But after ChatGPT was made available to the public, the rapid addition of a whole range of guardrails made it nearly immediately unusable.

You ask it about anything even slightly controversial and it shuts down, which, for me at least, removes any interest in using it.