iamkindasomeone

joined 1 year ago
[–] iamkindasomeone@feddit.de 0 points 4 months ago

I don’t quite understand what you mean by “extrapolate on information”. LLMs have no internal model of what information or truth is. However, factual information can be passed into the context, the way Bing does it.

[–] iamkindasomeone@feddit.de 1 points 4 months ago (2 children)

Your statement that there is no way of fact checking is not 100% correct, as developers have found ways to ground LLMs, e.g., by prepending context pulled from "real-time" sources of truth (e.g., search engines). This data is then incorporated into the prompt as context. Obviously this is kind of cheating since it isn't baked into the LLM itself, but it can be pretty accurate for a lot of use cases.
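
For anyone curious, here is a minimal sketch of that grounding pattern. The names `search_web` and `call_llm` are hypothetical stand-ins for whatever search API and LLM client you actually use:

```python
# Sketch of grounding an LLM by prepending retrieved context to the prompt.
# search_web() and call_llm() are hypothetical stand-ins; swap in your real
# search API and LLM client.

def search_web(query: str, max_results: int = 3) -> list[str]:
    # Stand-in: a real implementation would query a search engine
    # and return short text snippets.
    return [f"(snippet {i + 1} for: {query})" for i in range(max_results)]

def call_llm(prompt: str) -> str:
    # Stand-in: a real implementation would send the prompt to an LLM API.
    return f"(model answer based on a prompt of {len(prompt)} characters)"

def grounded_answer(question: str) -> str:
    # 1. Pull "real-time" facts from a source of truth (e.g., a search engine).
    snippets = search_web(question)
    context = "\n".join(f"- {s}" for s in snippets)

    # 2. Prepend that context so the model answers from the supplied facts,
    #    not only from whatever it memorized during training.
    prompt = (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    print(grounded_answer("Who won the 2023 Champions League final?"))
```

The point is that the model only ever sees the question plus the freshly retrieved snippets, so the "fact checking" happens in the retrieval step, not inside the model itself.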

[–] iamkindasomeone@feddit.de 1 points 1 year ago

I thought it did, though. That's what the mods said, at least.

[–] iamkindasomeone@feddit.de 4 points 1 year ago (2 children)

Well, they already did that when they shut down the API. And yet they try to keep that thing alive in most subs.

[–] iamkindasomeone@feddit.de 3 points 1 year ago

It's not the same. And there's proof why.

[–] iamkindasomeone@feddit.de 5 points 1 year ago

"I wonder what that Button does?!"

[–] iamkindasomeone@feddit.de 4 points 1 year ago

Some of them are for sure.