Someone made a GPT-like chatbot that runs locally on Raspberry Pi, and you can too
(www.xda-developers.com)
How fast are they with a good GPU?
Did you miss the first part, where I explained that I couldn't get it to run on my GPU? I only have a 6650 XT anyway, but even that would be significantly faster than my CPU. How much faster, I can't say without trying it, but I suspect that with longer chats, and consequently larger context sizes, it would still be too slow to be really usable, unless you're okay with waiting ages for a response.
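To put some rough numbers on "waiting ages": response time is basically reply length divided by generation speed. Here's a quick back-of-envelope sketch; the tokens/sec figures are illustrative assumptions, not benchmarks of any particular hardware.

```python
# Rough estimate of how long a reply takes at a given generation speed.
# The tokens/sec values below are made-up ballpark assumptions for
# illustration, not measured numbers for any specific CPU or GPU.

def reply_seconds(reply_tokens: int, tokens_per_sec: float) -> float:
    """Seconds to generate a reply of the given length."""
    return reply_tokens / tokens_per_sec

# A ~200-token reply (a few short paragraphs):
cpu_time = reply_seconds(200, 2.0)   # slow CPU inference: 100 s
gpu_time = reply_seconds(200, 40.0)  # decent GPU inference: 5 s

print(f"CPU: {cpu_time:.0f} s, GPU: {gpu_time:.0f} s")
```

And that's just generation; with a long chat history, the model also has to process the whole context before the first token appears, which is where big context sizes really hurt on CPU.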
Sorry, I'm just curious how fast these local LLMs are in general. Maybe someone else can give some rough figures.