this post was submitted on 25 Oct 2024
889 points (98.7% liked)
Technology
Don't overestimate LLMs: they can't code and never will be able to. They can produce convincing-looking templates and boilerplate that is only sometimes nonsense, but those aren't the fun parts of the coding process anyway. In my experience, an LLM doesn't help at all; I spend more time fixing its nonsense than I would have spent writing the code myself, so I don't use one.
As someone without a computer science background who started learning Python for data science shortly before LLMs became mainstream, I gotta say it's been pretty useful for the learning process. I don't mean I just use it to write scripts for me; rather, it can be a useful sort of guide, the way a scripted advisor might be in a game. Seems to me that one of the good sides of LLMs is that they can make technically difficult fields more accessible, as long as you understand their limits and know what they can and can't do. I would never use one for any sort of subjective issue, but I find it great for logical tasks. And this is not to say it's perfect for those either, but it has increased my efficiency on certain work tasks tremendously.
As someone with degrees and decades of experience, I urge you not to use it for that. It's a cleverly disguised randomness machine: it will give you incorrect information that is indistinguishable from truth, because truth is never the criterion it can use; being convincing is. It will seed those untruths in you, and unlearning bad practices you picked up at the beginning might take years and cost you a career. And since you're just starting, you have no idea how to tell the bullshit from the truth as long as the final result seems to work, and that's the worst way to hide the bullshit from you.
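To make the "randomness machine" point concrete, here's a minimal toy sketch in Python (my own illustration with made-up scores, not how any real model is implemented): the next token is sampled by plausibility score, and nothing anywhere in the process checks whether the output is true.

```python
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Pick a continuation by plausibility, never by truth."""
    # Softmax: turn raw plausibility scores into sampling weights.
    max_s = max(scores.values())
    weights = [math.exp((s - max_s) / temperature) for s in scores.values()]
    # Weighted random draw: the "cleverly disguised randomness machine".
    return random.choices(list(scores), weights=weights, k=1)[0]

# Hypothetical scores for continuing "The capital of Australia is":
# a common wrong answer can outscore the right one, so the model
# confidently returns it much of the time.
scores = {"Sydney": 2.0, "Canberra": 1.5, "Melbourne": 0.5}
print(sample_next_token(scores))
```

The wrong answer comes out fluent and confident because fluency and confidence are exactly what the sampling optimizes for.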
The field is already very accessible to everyone who wants to learn it: the number of guides, examples, teaching courses, and very useful YouTube videos with thick Indian accents is already enormous, and most of them at least try to self-correct, while an LLM actively doesn't; in fact, it does the opposite.
Best case scenario, you're learning inefficiently; worst case scenario, you aren't learning at all.
Thank you, I will take this into consideration. It sure is tempting to use LLMs, but I will always trust experts in the field over them.
Yeah, the scary thing about LLMs is that by their very nature they sound convincing, and it's very easy to fall into a trap: we humans are hardwired to mistake the ability to talk smoothly for intelligence, so when computers started speaking in complete sentences and holding the immediate context of a conversation, we immediately decided we had a thinking machine and started believing it.
The worst thing is, there are legit uses for all the machine-learning stuff, and for LLMs in particular, so we can't just throw it all out the window; we will have to collectively adapt to this very convincing randomness machine that is just here, all the time.