this post was submitted on 30 Jan 2025

LinkedinLunatics


A place to post ridiculous posts from linkedIn.com



This guy is very, very scared of DeepSeek and all the potential malicious things it will do, seemingly because it's Chinese. As soon as the comments point out that ChatGPT is probably worse, he disagrees without offering any reasoning.

Transcription:

DeepSeek as a Trojan Horse Threat.

DeepSeek, a Chinese-developed AI model, is rapidly being installed into productive software systems worldwide. Its capabilities are impressive: hyper-advanced data analysis, seamless integration, and an almost laughably low price. But here's the problem: nothing this cheap comes without a hidden agenda.

What's the real cost of DeepSeek?

  1. Suspiciously Cheap: Advanced models like DeepSeek aren't "side projects." They take massive investments, resources, and expertise to develop. If it's being offered at a fraction of its value, ask yourself: who's really paying for it?

  2. Backdoors Everywhere: DeepSeek's origin raises alarm bells. The more systems it infiltrates, the more it becomes a potential vector for mass compromise. Think backdoors, data exfiltration, and remote access at scale: hidden vulnerabilities deliberately built in.

  3. Wide Adoption = Global Risk: From finance to healthcare, DeepSeek is being installed across critical systems at an alarming rate. If adoption continues unchecked, 80% of our systems could soon be compromised.

  4. The Trojan Horse Effect: DeepSeek is a textbook Trojan horse strategy: lure organizations with a cheap, powerful tool, infiltrate their systems, and quietly map or control them. Once embedded, reversing the damage will be nearly impossible.

The Fairytale Isn't Real

The story of DeepSeek being a "low-cost side project" is just that: a fairytale. Technology like this isn't developed without strategic motives. In the world of cyber warfare, cheap tools often come at the highest cost.

What Can We Do?

Audit your systems: Is DeepSeek already embedded in your critical infrastructure?

Ask the hard questions: Why is this so cheap? Where's the transparency?

Take immediate action: Limit adoption before it's too late. The price may look attractive, but the real cost could be our collective security.

Don't fall for the fairytale.

[–] taanegl@lemmy.ml 8 points 10 hours ago (1 children)

"Suspiciously cheap"

Tell that to every writer who took decades honing their craft, just for western AI to come in and hoover it all up and sell it in a monthly subscription.

Fucking jackass.

[–] Nalivai@discuss.tchncs.de 1 points 6 hours ago (1 children)

western

And what exactly does eastern do?

[–] RisingSwell@lemmy.dbzer0.com 1 points 2 hours ago

I think they hoovered up the stuff the western one already hoovered up, for bonus funny points.

[–] zarkanian@sh.itjust.works 4 points 9 hours ago (1 children)

So, basically the old anti-Linux FUD with some anti-China sentiment sprinkled on top?

[–] Chakravanti@monero.town 1 points 3 hours ago* (last edited 3 hours ago)

That's stupid. I'm anti-anything-not-FOSS because I am not going to trust anyone I don't know.

Same note, I am not going to trust any AI. Ever. I won't trust anyone or anything I don't know. That's stupid.

[–] MissGutsy@lemmy.blahaj.zone 11 points 12 hours ago (1 children)

Can't you run it locally, because it's open source? Like, yeah, don't implement software running on servers you don't control, duh! The same thing is true of ChatGPT, with the exception that you cannot run that yourself, so DeepSeek is actually safer for companies. All these products that just send requests to OpenAI's servers are stupid anyway. They are working closely with a fascist government now; do you really want your personal data to go through them?
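Running it locally like this comment describes can be sketched with Ollama (a hedged example, not an endorsement of any particular stack; the `deepseek-r1:7b` model tag and your hardware being able to fit it are assumptions — check the Ollama model library for current tags):

```shell
# Download a distilled DeepSeek-R1 variant once; afterwards it runs
# entirely on your own machine, so no prompt or response leaves the host.
ollama pull deepseek-r1:7b

# One-shot prompt against the local model (no internet required after the pull).
ollama run deepseek-r1:7b "Summarize the risks of sending internal data to third-party APIs."
```

The point the comment makes is exactly this: a self-hosted model can't exfiltrate anything, regardless of who trained it.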

[–] Bumblefumble@lemm.ee 3 points 12 hours ago

That's essentially what the comments were telling him, so at least there's some sanity.

[–] hamid@vegantheoryclub.org 3 points 9 hours ago

I guarantee you it hasn't been embedded into much of anything yet; what are they even talking about? I still get questions daily from people in a similar job role to his, like "how can we use AI, we're behind everyone else!" Except everyone is saying that, so how can everyone be behind everyone else?

[–] Hagdos@lemmy.world 17 points 1 day ago

The whole thing reads like it's written by whatever LLM Linkedin provides

[–] MoonlightFox@lemmy.world 26 points 1 day ago (3 children)

I'm answering this as a Scandinavian.

One of the biggest issues I see with DeepSeek, and really any AI, is that people feed it sensitive data. DeepSeek is probably not a big issue as long as people don't share sensitive data about other people.

People find a tool that makes them more effective, then they use it at work and, unfortunately, insert data that should not be shared.

The risk is also there for ChatGPT and Claude. The difference is that they are not a company from a country that is considered adversarial by my government.

The USA is not perfect, far from it, and we KNOW from the Snowden leaks that they can't be trusted. Yet they are allies and can thus, by extension, be trusted more than a country with laws that force cooperation from companies and people worldwide.

As a European I prefer that my data is leaked to the USA over China. But I trust neither with it.

I might be wrong, and would like to learn that I am wrong. So feel free to try to convince me otherwise.

Recommended reading: https://en.m.wikipedia.org/wiki/National_Intelligence_Law_of_the_People%27s_Republic_of_China

https://en.m.wikipedia.org/wiki/Cybersecurity_Law_of_the_People's_Republic_of_China

[–] jol@discuss.tchncs.de 19 points 1 day ago (1 children)

At this point I can't say I trust my data with the US more than China tbh. China isn't threatening to attack Europe, and Chinese companies are not actively bribing EU governments for months, and interfering with elections.

But anyway, all this FUD always forgets to mention that you can also just host your own uncensored, unmonitored Deepseek model if you work with sensitive information.

[–] PuddleOfKittens@sh.itjust.works 1 points 20 hours ago (1 children)

and Chinese companies are not actively bribing EU governments for months

Is the US doing that? What are you referring to?

[–] CheeseNoodle@lemmy.world 7 points 12 hours ago

Musk has been feeding money to right wing parties in Europe and trying to stir shit up.

[–] Bumblefumble@lemm.ee 13 points 1 day ago (1 children)

Look, I'm Scandinavian as well, and I kinda agree with you to some point, at least historically. Although I have some serious trust issues with the US given, well *gestures broadly at everything*.

With that said, I find it quite delusional that this guy dreams up all these fearmongering scenarios, many of which I'm not even sure are technically feasible, while completely dismissing any criticism of OpenAI or similar US based companies. To him China=100% evil and out to get ya, US=0% evil and out to get ya. And this sort of view of the world is just so detached from reality.

[–] MoonlightFox@lemmy.world 4 points 1 day ago

Yes, his view of the world is not correct, imo. I just saw an opportunity to talk about LLMs and privacy and took it.

[–] Evotech@lemmy.world 4 points 1 day ago

Anything you give to a US company gets shared with 850 other companies, and who knows where it goes from there.

In China, at least, it's just the Chinese government.

[–] eternacht@programming.dev 42 points 1 day ago (2 children)

This supposed Chief Technology Officer appears to understand very little about how technology actually works.

[–] Pogogunner@sopuli.xyz 34 points 1 day ago

Someone is afraid of getting asked why they wasted so much money on AI.

“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”

The CTO at my last job was pulling down millions a year despite barely knowing anything about anything, and being little more than a glorified bully. He got there via project management, after all.

[–] Aurenkin@sh.itjust.works 35 points 1 day ago (1 children)

Not that these concerns aren't real, necessarily, but they certainly aren't unique to DeepSeek. The real win is in propaganda: if you have a very capable and cheap model that you can get everyone using, you can push the party line on sensitive issues much more effectively beyond your borders. I don't think DeepSeek is a good tool for that yet, because it just refuses to discuss those issues, but I wouldn't be surprised if that is the direction they take with future models.

Still, the fact that it's heavily censored when it comes to sensitive CCP issues makes it a no from me.

[–] Rentlar@lemmy.ca 12 points 1 day ago (1 children)

ChatGPT and CoPilot have the same concerns or worse for me.

[–] Aurenkin@sh.itjust.works 11 points 1 day ago (3 children)

I disagree strongly, ChatGPT will gladly tell you all about the My Lai Massacre for example. Not to say it's perfect or completely uncensored but to say it's worse or the same...I just can't get there.

[–] Rentlar@lemmy.ca 15 points 1 day ago* (last edited 1 day ago) (1 children)

It's not about the censorship for me (though I recognize that was your main point; I should have made a top-level comment), it's about the infiltration of it within our computers, tech, and our daily lives, to the point that we become dependent on it. I'm worried that at any moment the controlling entity could change it on a whim, publicly or covertly.

[–] Aurenkin@sh.itjust.works 3 points 1 day ago

I see what you mean. I agree with that and in that sense DeepSeek is actually a really good thing because it gives some hope that you don't need insane amounts of money for a powerful model. Let's hope access and development doesn't get too concentrated.

[–] HobbitFoot@thelemmy.club 2 points 1 day ago

From what others have said about DeepSeek, you can run the AI model on your own hardware, and the censoring is applied after the model outputs its response, on their servers.

[–] h4x0r@lemmy.dbzer0.com 3 points 1 day ago* (last edited 1 day ago)

OpenAI captures everything you feed it, while the other orgs that concern you provide models that can be run locally without an internet connection. I can look Tiananmen Square up on Wikipedia, but none of us can control what OpenAI does with our data.

[–] LostWanderer@lemmynsfw.com 13 points 1 day ago (1 children)

ROFL. He's sweating so much because DeepSeek is proving their little money-making scam shouldn't be as expensive and resource-intensive as it is! So he's out here trying to shame DeepSeek, which will make investors ask a lot of hard questions and retract funding for their AI lie. If DeepSeek could burst the bubble of American-made LLMs, I'd be tickled pink. I'd naturally never use it, as LLMs are really only good for spellchecking and grammar in my opinion (they never should've strayed further than that without proper research and a true code of ethics that wouldn't be constantly overstepped).

I love how much this C-suite shitbag is malding at the moment!

[–] Doom@ttrpg.network 4 points 1 day ago

nah, LLMs have uses. As a chef I can plug in ingredients and it will generate good combinations that can help inspire me. For D&D it can help fill in a few spaces I didn't think of. It's good as a sounding board for my creativity.

[–] guy@piefed.social 23 points 1 day ago (1 children)

Lol China is shit but not everything from China is a Communist party psyop

[–] PrivateNoob@sopuli.xyz 22 points 1 day ago (1 children)

Also you can run it locally which eliminates all concerns.

[–] sugar_in_your_tea@sh.itjust.works 10 points 1 day ago* (last edited 1 day ago) (2 children)

Not all, they likely still embed some pro-CCP nonsense in the model. It's unlikely to be a security issue to your machine, but it could alter public perception, which could be in China's interests.

Whether that's an actual problem that needs action is another issue. I don't know about you, but my intended use-cases have very little risk of indoctrination (e.g. code analysis and generation).

[–] kaprap@leminal.space 2 points 1 day ago (2 children)

If it had been embedded with pro-CCP material, it would not have censorship in place that trips on trigger words (lol)

It's critical of China in every way except those specific events. If it had been trained to push pro-CCP material it wouldn't lock up but instead argue with you, for example, that the Uyghur genocide has no concrete proof, orrrr that there are studies showing it is just detention camps for terrorists, orrrr that Tiananmen was started by the students and escalated into a tragedy, orrrr that the tank man was just an individual trying to speak with the officers rather than an act of heroism.

But it doesn't do that, and instead focuses on censorship.

[–] napkin2020@sh.itjust.works 1 points 19 hours ago

I've been using local R1, and boy does it try to argue, alright. I didn't test it further after a few attempts, but when given solid proof about X, it just responds with "the CCP is people-centered and we believe it's right for the Chinese people" and "your X claim is groundless."

But then again, it doesn't (or can't?) argue why my claims are considered groundless... so you could say it doesn't really argue at all.

Maybe they'll add that to the next gen. Or maybe not, I guess we'll see.

[–] PrivateNoob@sopuli.xyz 4 points 1 day ago (1 children)

Ah yeah, that's true. I'm not really knowledgeable in AI training, but can't you use the DeepSeek R1 model as a base and train over it with more international data (like adding some Tiananmen Square knowledge to it so it produces actual facts)?

I'm not super knowledgeable either, so I don't know if models can easily be extended like that. But you can always sample from multiple models.

[–] Saledovil@sh.itjust.works 3 points 1 day ago

My theory on how DeepSeek managed to beat out ChatGPT: all LLMs do is try to guess what the next token is, based on statistics. Making the statistics more precise requires exponentially more training data and weights; you can see this if you compare model sizes, which tend to double for the next level of model.

So, as a consequence, you can build a model half as big and only sacrifice a fraction of the performance.
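The next-token guessing this comment describes can be sketched in a few lines of Python (a toy illustration only, not how any real model is implemented; the vocabulary and logit values are made up):

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over tokens."""
    # Subtract the max for numerical stability before exponentiating.
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Toy "model": fake scores for the token following "the cat sat on the".
logits = {"mat": 4.0, "floor": 2.5, "moon": 0.5}
probs = softmax(logits)

# Greedy decoding just picks the most probable token; a bigger, better-trained
# model sharpens these statistics, which is where all the cost goes.
next_token = max(probs, key=probs.get)
```

Everything a chat model emits is this loop repeated: score every token, normalize, pick one, append, repeat.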

[–] mtpender@sh.itjust.works 2 points 1 day ago (2 children)

If it comes from China, I don't trust it.

[–] Shiggles@sh.itjust.works 27 points 1 day ago (2 children)

You shouldn’t really be trusting OpenAI any more, though.

[–] TrickDacy@lemmy.world 4 points 1 day ago

You trusted it to begin with?

[–] mtpender@sh.itjust.works 2 points 1 day ago* (last edited 1 day ago)

Who said I use ANY Abominable Intelligence?

Neither do I, but I'll still use it if it saves me time even with verifying its results.

[–] small44@lemmy.world 1 points 1 day ago

They didn't learn from TikTok users moving to RedNote.