this post was submitted on 18 Sep 2023
-22 points (31.0% liked)
Asklemmy
you are viewing a single comment's thread
You are thinking way too small about what can be done with that amount of data on you. I'll give you an example. I once did some programming work for a website. The website got 'hacked' (an admin had their password guessed because they weren't using good password habits). This website had poor security, and with the admin's password the 'hacker' was able to get a DB dump. Bad stuff.

So another guy and I set out to identify who had done it. Via server logs we were pretty sure we had correctly tied the 'hacker' to a user of the site. By looking at their activity on the site, and what referral links they had previously followed to get to the site, we learned their first name and approximately where they lived. But we knew we needed more info than that, so we looked at his hobbies and figured out he liked Pokemon quite a lot. We then created a 'what Pokemon are you' quiz, asking mostly unimportant questions but throwing in a couple we needed in order to be able to report him to his local authorities (i.e. his last name and some other info I can't remember off the top of my head). We then had this quiz posted by an account not associated with the running of the site. The 'hacker' filled it out, and we reported him to his local authorities with our evidence.
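The log-correlation step above can be sketched roughly like this. This is a toy illustration under assumptions, not the actual code or log format from the story: the field layout, paths, and usernames here are all made up.

```python
from collections import defaultdict

# Toy access-log entries: (ip, logged_in_user_or_None, path, referrer).
# In reality these would be parsed out of raw web-server logs.
log = [
    ("203.0.113.7", None, "/admin/db_export", ""),  # the suspicious request
    ("203.0.113.7", "pikafan42", "/profile",
     "https://a-pokemon-forum.example/thread/123"),  # hints at hobbies
    ("198.51.100.2", "someone_else", "/home", ""),
]

# Group all activity by source IP.
by_ip = defaultdict(list)
for ip, user, path, ref in log:
    by_ip[ip].append((user, path, ref))

# IPs that touched the suspicious endpoint...
suspect_ips = {ip for ip, entries in by_ip.items()
               if any(path == "/admin/db_export" for _, path, _ in entries)}

# ...and the logged-in accounts that share those IPs.
suspects = {user for ip in suspect_ips
            for user, _, _ in by_ip[ip] if user}

print(suspects)  # {'pikafan42'}
```

The referrers attached to the matched account's requests are then what leak the off-site interests (the Pokemon forum here) that made the quiz bait plausible.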
So to reiterate: two idiots with no background in data science, and like 16 hours between us, were able to manipulate an arbitrary guy into doing what we wanted because of a relatively small amount of data. Now imagine what people dedicating their lives to this stuff can do to you.
I probably should care about what big companies are doing with my data, but honestly I feel I'll just be one more person in a group of a million. Companies won't care.
What I'm scared of is stuff like the example above: a dedicated person trying to connect my online identity to my real-life one.
LMAO, they really answered one of those password-reset-answer phishing-ass quizzes? Lucky for you they were not sending their best.
Although they were the target, they were far from the only person to fill it out. Context can make people drop their guard. But yes, not some criminal mastermind. Of course, again, I'm some idiot programmer, not a genius forensic computer detective.
Maybe I lack foresight, but what can they do... guess which university I study at, what meds I take? The government could know much more if it committed itself to that.
Some guy mentioned insurance, which would probably be a real concern, but I live in a country where we don't have it.
Although my anecdote ended with additional data collection, the scary part is the manipulation of action. You might think that, as an example, they see you browsing a Pokemon website and therefore show you more Pokemon ads, something that could be mutually beneficial. What you should be worried about is something like this: based on your browsing behaviour, they figure out how to manipulate your political action, or they figure out your state of mental well-being and manipulate it. There are especially horrific cases here when this is algorithm-driven instead of being pushed by humans. One could imagine (and I want to preface this by saying I'm not aware of this ever having happened) a machine learning algorithm relating signs of some mental illnesses to an uptick in firearm sales, and then increasing firearm advertising to those people. You could imagine this driving things like an increased suicide rate.
There are many other outlets for propaganda. Their effectiveness is hard to measure, as is the effectiveness of ads. Figuring out how to manipulate someone is philosophically impossible: how would you train an AI if you don't know whether your actions led to success or not? Mental conditions are themselves poorly understood and defined, and we only have superficial web browsing data at our disposal.