No Stupid Questions
No such thing. Ask away!
!nostupidquestions is a community dedicated to being helpful and answering each other's questions on various topics.
The rules for posting and commenting, besides the rules defined here for lemmy.world, are as follows:
Rules
Rule 1- All posts must be legitimate questions. All post titles must include a question.
All posts must be legitimate questions, and all post titles must include a question. Joke or trolling questions, memes, song lyrics as titles, etc. are not allowed here. See Rule 6 for all exceptions.
Rule 2- Your question subject cannot be illegal or NSFW material.
Your question subject cannot be illegal or NSFW material. You will be warned first, banned second.
Rule 3- Do not seek mental, medical, or professional help here.
Do not seek mental, medical, or professional help here. Breaking this rule will not get you or your post removed, but it will put you at risk, and possibly in danger.
Rule 4- No self promotion or upvote-farming of any kind.
That's it.
Rule 5- No baiting or sealioning or promoting an agenda.
Questions which, instead of being of an innocuous nature, are specifically intended (based on reports and in the opinion of our crack moderation team) to bait users into ideological wars on charged political topics will be removed and the authors warned - or banned - depending on severity.
Rule 6- Regarding META posts and joke questions.
Provided it is about the community itself, you may post non-question posts using the [META] tag on your post title.
On Fridays, you are allowed to post meme and troll questions, on the condition that they are in text format only and conform with our other rules. These posts MUST include the [NSQ Friday] tag in their title.
If you post a serious question on a Friday and are looking only for legitimate answers, then please include the [Serious] tag in your post title. Irrelevant replies will then be removed by moderators.
Rule 7- You can't intentionally annoy, mock, or harass other members.
If you intentionally annoy, mock, harass, or discriminate against any individual member, you will be removed.
Likewise, if you are a member, sympathiser, or supporter of a movement that is known to largely hate, mock, discriminate against, and/or want to take the lives of a group of people, and you are provably vocal about your hate, then you will be banned on sight.
Rule 8- All comments should try to stay relevant to their parent content.
Rule 9- Reposts from other platforms are not allowed.
Let everyone have their own content.
Rule 10- The majority of bots aren't allowed to participate here.
Credits
Our breathtaking icon was bestowed upon us by @Cevilia!
The greatest banner of all time: by @TheOneWithTheHair!
I'm a bit disturbed by how people's beliefs are literally shaped by an algorithm. Now I'm scared to watch YouTube because I might be inadvertently watching propaganda.
It's even worse than "a lot easier". Ever since the advances in ML went public, with things like Midjourney and ChatGPT, I've realized that ML models are way, way better at doing their thing than I'd thought.
The Midjourney model's purpose is to receive text and give out a picture. And it's really good at that, even though the dataset wasn't all that large. Same with ChatGPT.
Now, Meta has (EDIT: just speculation, but I'm 95% sure they do) a model which receives all the data they have about a user (which is A LOT) and returns which posts to show them, and in what order, to maximize their time on Facebook. And it has been trained for years on a live dataset of 3 billion people interacting daily with the site. That's a wet dream for any ML model. Imagine what it would be capable of even if it were only as good as ChatGPT at its task - and it had an incomparably better dataset and learning opportunities.
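To make the idea concrete, here's a minimal, entirely hypothetical sketch of what an engagement-maximizing feed ranker boils down to. Nothing here reflects any published Meta system; the names and the "predicted seconds on site" score are my own illustration of the objective being described:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    predicted_seconds_on_site: float  # output of a (hypothetical) trained engagement model

def rank_feed(posts):
    """Order a user's candidate posts by predicted time-on-site.

    Toy illustration of a pure engagement objective: the feed is sorted
    only by how long the model predicts each post will keep the user on
    the platform. Accuracy, truthfulness, and well-being appear nowhere
    in the objective.
    """
    return sorted(posts, key=lambda p: p.predicted_seconds_on_site, reverse=True)

candidates = [Post(1, 12.0), Post(2, 95.0), Post(3, 40.0)]
feed = rank_feed(candidates)
print([p.post_id for p in feed])  # prints [2, 3, 1]
```

The real systems are vastly more complex, but the point stands: whatever content scores highest on predicted engagement goes to the top, whatever that content happens to be.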
I'm really worried for the future in this regard, because it's only a matter of time before someone with power decides that the model should not only keep people on the platform, but also make them vote for X. And there is nothing you can do to defend against it, other than never interacting with anything with curated content, such as Google search, YT, or anything Meta - because even if you know there's a model trying to manipulate you, the model knows there are a lot of people like that. And it's already learning and testing how to manipulate even people like that. After all, it has 3 billion test subjects.
That's why I'm extremely focused on privacy and on my data - not that I have something to hide, but I take really serious issue with someone using such data to train models like that.
Just to let you know, Meta has an open-source model, LLaMA, and it's basically state of the art for the open-source community, but it falls short of GPT-4.
The nice thing about the LLaMA derivatives (Vicuna and WizardLM) is that you can run them locally at roughly 80% of ChatGPT-3.5's quality, so no one is tracking your searches/conversations.
My personal opinion is that it's one of the first large cases of misalignment in ML models. I'm 90% certain that Google and other platforms have for years been using ML models that take a user's history and the data they have about them as input, and output which videos to offer them, with the goal of maximizing the time they spend watching videos (or on Facebook, etc.).
And the models eventually found out that if you radicalize someone, isolate them into a conspiracy that makes them an outsider or a nutjob, and then provide a safe space and an echo chamber on the platform, be it Facebook or YouTube, they will eventually start spending most of their time there.
I think this subject was touched upon in The Social Dilemma documentary, but given what is happening in the world, and how conspiracies and disinformation seem to be getting more and more common and people more radicalized, I'm almost certain that the algorithms are to blame.
If YouTube's "algorithm" is optimizing for watch time, then the most optimal solution is to make people addicted to YouTube.
The scariest thing, I think, is that the best way to optimize the reward is not to recommend a good video but to reprogram a human to watch as much as possible.
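This dynamic can be reproduced in a toy simulation (all numbers invented for illustration): an epsilon-greedy recommender that is rewarded only with watch time will converge on whichever content category keeps people watching longest, even though the objective never mentions what that content is:

```python
import random

random.seed(0)

# Hypothetical average watch time (minutes) per content category.
# The recommender knows nothing about the categories except the reward signal.
watch_time = {"news": 5.0, "hobbies": 8.0, "conspiracy": 25.0}

estimates = {c: 0.0 for c in watch_time}
counts = {c: 0 for c in watch_time}

def recommend(epsilon=0.1):
    """Epsilon-greedy: usually pick the category with the best estimated
    watch time, occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(list(watch_time))
    return max(estimates, key=estimates.get)

for _ in range(10_000):
    c = recommend()
    reward = random.gauss(watch_time[c], 2.0)  # noisy watch time for this session
    counts[c] += 1
    estimates[c] += (reward - estimates[c]) / counts[c]  # incremental mean update

# The reward never mentioned "conspiracy"; it only counted minutes watched.
print(max(counts, key=counts.get))  # prints "conspiracy"
```

The misalignment is that "maximize minutes watched" was never the outcome anyone actually wanted; it was just the easiest thing to measure and reward.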
I think that making someone addicted to YouTube would be harder than simply, slowly radicalizing them into a shunned echo chamber around a conspiracy theory. If you just try to make someone addicted to YouTube, they still have an alternative in the real world - friends and family to return to.
But if you radicalize them into something that makes them seem like a nutjob, you don't have to compete with their surroundings - the only place where they feel understood is on YouTube.
100% they're using ML, and 100% it found a strategy they didn't anticipate
The scariest part of it, though, is their willingness to continue using it despite the obvious consequences.
I think misalignment is not only likely to happen (for an eventual AGI), but likely to be embraced by the entities deploying it, because the consequences may not impact them. Misalignment is relative.
Reason and critical thinking are all the more important in this day and age. They're just no longer taught in schools. Learn some simple key skills, like noticing fallacies and analogous reasoning, and you will find that your view of life is far more grounded and harder to shift.
Just be aware that we can ALL be manipulated, the only difference is the method. Right now, most manipulation is on a large scale. This means they focus on what works best for the masses. Unfortunately, modern advances in AI mean that automating custom manipulation is getting a lot easier. That brings us back into the firing line.
I'm personally an Aspie with a scientific background. This makes me fairly immune to a lot of manipulation tactics in widespread use. My mind doesn't react how they expect, and so it doesn't achieve the intended result. I do know however, that my own pressure points are likely particularly vulnerable. I've not had the practice resisting having them pressed.
A solid grounding gives you a good reference, but no more. As individuals, it is down to us to use that reference to resist undue manipulation.
My normal YT algorithm was OK, but Shorts tries to pull me to the alt-right.
I had to block many channels to get a sane Shorts algorithm.
"Do not recommend channel" really helps
Using Piped/Invidious/NewPipe/insert your preferred alternative frontend or patched client here (YouTube's legal threats are empty; these are still operational) helps even more, showing you only the content you have opted in to.
You watch this one thing out of curiosity, morbid curiosity, or by accident, and at the slightest poke the goddamned mindless algorithm starts throwing this shit at you.
The algorithm is "weaponized" for whoever screams the loudest, and I truly believe it started due to myopic incompetence/greed, not political malice. Which doesn't make it any better, as people don't know how to protect themselves from this bombardment, but the corporations like to pretend that ~~they~~ people can, so they wash their hands of it for as long as they are able.
Then on top of this, the algorithm has been further weaponized by even more malicious actors who have figured out how to game the system.
That's how toxic meatheads like Infowars and Joe Rogan get a huge bullhorn that reaches millions. "Huh... DMT experiences... sounds interesting", the format is entertaining... and before you know it, you're listening to anti-vax and QAnon excrement, and your mind starts to normalize the most outlandish things.
EDIT: a word, for clarity
Whenever I end up watching something from a bad channel I always delete it from my watch history, in case that affects my front page too.
Huh, I tried that. Still got recommended incel videos for months after watching a moron "discuss" the Captain Marvel movie. Eventually I went through and clicked "Don't recommend channel" on everything that showed up on my front page; that helped.