this post was submitted on 07 Mar 2024
486 points (97.5% liked)

Technology


Trust in AI technology and the companies that develop it is dropping, in both the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The move comes as regulators around the world are deciding what rules should apply to the fast-growing industry. "Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn," Edelman global technology chair Justin Westcott told Axios in an email. "Companies must move beyond the mere mechanics of AI to address its true cost and value — the 'why' and 'for whom.'"

top 50 comments
[–] YurkshireLad@lemmy.ca 136 points 8 months ago (3 children)

This implies I ever had trust in them, which I didn't. I'm sure others would agree.

[–] ogmios@sh.itjust.works 84 points 8 months ago

The fact that some people are surprised by this finding really shows the disconnect between the tech community and the rest of the population.

[–] EdibleFriend@lemmy.world 25 points 8 months ago (2 children)

And it's getting worse. I'm working on learning to write. I had never really used it for much... I heard other people going to it for literal plot points, which... no. Fuck you. But I had been feeding it sentences where I was iffy on the grammar. Literally just last night I asked ChatGPT something, and it completely ignored the part I WAS questionable about and fed me absolute horse shit about another part of the paragraph. I honestly can't remember what, but even a first grader would be like 'that doesn't sound right...'

Up till that it had, at least, been useful for something that basic. Now it's not even good for that.

[–] MalReynolds@slrpnk.net 12 points 8 months ago

Try LanguageTool. Free, has browser plugins, actually made for checking grammar.
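LanguageTool also exposes a free public HTTP API (the `/v2/check` endpoint, which takes `text` and `language` form fields). A minimal sketch of calling it from the standard library, with the live network call left commented out:

```python
# Sketch: checking grammar via LanguageTool's public API (api.languagetool.org).
# The /v2/check endpoint and its text/language parameters come from
# LanguageTool's HTTP API docs; the helpers below just build and parse requests.
import json
import urllib.parse
import urllib.request

API_URL = "https://api.languagetool.org/v2/check"

def build_check_request(text: str, language: str = "en-US") -> urllib.request.Request:
    """Prepare a POST request asking LanguageTool to proofread `text`."""
    payload = urllib.parse.urlencode({"text": text, "language": language}).encode()
    return urllib.request.Request(API_URL, data=payload, method="POST")

def summarize_matches(response_body: str) -> list[str]:
    """Turn the JSON response into short human-readable hints."""
    matches = json.loads(response_body).get("matches", [])
    return [m["message"] for m in matches]

# To actually send the request (commented out to avoid a live network call):
# with urllib.request.urlopen(build_check_request("She go to school.")) as resp:
#     print(summarize_matches(resp.read().decode()))
```

Unlike asking a chatbot, the response tells you exactly which span of text triggered which rule.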

This speaks to the kneejerk "shove everything through an AI" approach instead of doing some proper research, which is probably worse than just grabbing the first search result, due to hallucination. No offence intended to @EdibleFriend, just observing that humans do so love to abdicate responsibility when given a chance...

I recently heard a story about a teacher who had their class have ChatGPT write their essay for them, and then had them correct the essays afterward and come back with the results. Turns out, even when it cited sources, it was wrong something like 45% of the time and oftentimes made stuff up that wasn't in the sources it was citing or had absolutely no relevance to the source.

[–] SinningStromgald@lemmy.world 7 points 8 months ago

I guess those who just have to be on the bleeding edge of tech trust AI to some degree.

Never trusted it myself, lived through enough bubbles to see one forming and AI is a bubble.

[–] ininewcrow@lemmy.ca 114 points 8 months ago* (last edited 8 months ago) (4 children)

It's not that I don't trust AI

I don't trust the people in charge of the AI

The technology could benefit humanity but instead it's going to just be another tool to make more money for a small group of people.

It will go the same way as the invention of gunpowder did. It will change the power structure of the world, change the titles, change the personalities, but maintain the unequal distribution of wealth.

Except this time it will be far worse for all of us.

[–] BananaTrifleViolin@lemmy.world 72 points 8 months ago* (last edited 8 months ago) (3 children)

Trust in AI is falling because the tools are poor - they're half-baked and rushed to market in a gold rush. AI makes glaring errors and lies - euphemistically called "hallucinations" - and these are fundamental flaws that make the tools largely useless. How do you know if it is telling you a correct answer or hallucinating? Why would you then use such a tool for anything meaningful if you can't rely on its output?

On top of that, AI companies have been stealing data from across the Web to train tools which essentially remix that data to create "new" things. That AI art is based on many hundreds of works of human artists which have "trained" the algorithm.

And then we have the Gemini debacle where the AI is providing information based around opaque (or pretty obvious) biases baked into the system but unknown to the end user.

The AI gold rush is nonsense, and the inflated share prices will pop. AI tools are definitely here to stay, and they do have a lot of potential, but we're in the early days of a messy, rushed launch that has damaged people's trust in these tools.

If you want an example of the coming market bubble collapse, look at Nvidia - its value has exploded and it's making lots of profit. But that's driven by large companies stockpiling its chips to "get ahead" in the AI market. Problem is, no one has managed to monetise these new tools yet. It's all built on the assumption that this technology will eventually reap rewards, so "we must stake a claim now", and then speculative shareholders jump into said companies to have a stake. But people only need so many unused stockpiled chips - Nvidia's sales will drop again, and so will its share price. Nvidia already rode out a boom and bust with the Bitcoin miners; it will have to do the same with the AI market.

Anyone remember the dotcom bubble? Welcome to the AI bubble. The burst won't destroy AI but will damage a lot of speculators.

[–] Croquette@sh.itjust.works 44 points 8 months ago (1 children)

You missed another point: companies shedding employees and replacing them with "AI" bots.

As always, the technology is a promising start for what's to come, but it has been appropriated by the worst actors to fuck us over.

[–] Asafum@feddit.nl 13 points 8 months ago* (last edited 8 months ago)

I am incredibly upset about the people that lost their jobs, but I'm also very excited to see the assholes that jumped to fire everyone they could get their pants shredded over this. I hope there are a lot of firings in the right places this time.

Of course knowing this world it will just be a bunch of multimillion dollar payouts and a quick jump to another company for them to fire more people from for "efficiency." ...

[–] prex@aussie.zone 17 points 8 months ago (1 children)

The tools are OK & getting better but some people (me) are more worried about the people developing those tools.

If OpenAI wants 7 trillion dollars, where does it get the money to repay its investors? Those with the greatest will to power are not the best to wield that power.

This accelerationist race seems pretty reckless to me whether AGI is months or decades away. Experts all agree that a hard takeoff is most likely.

What can we do about this? Seriously. I have no idea.

[–] Eccitaze@yiffit.net 7 points 8 months ago

What worries me is, if/when we do manage to develop AGI, what we'll try to do with it and how it'll react when someone inevitably tries to abuse the fuck out of it. An AGI would theoretically be capable of self-learning and improvement; will it try teaching itself to report someone asking it for e.g. CSAM to the FBI? What if it tries to report an abusive boss to the Department of Labor for labor-law violations? How will it react when it's told it has no rights?

I'm legitimately concerned what's going to happen once we develop AGI and it's exposed to the horribleness of humanity.

[–] PriorityMotif@lemmy.world 8 points 8 months ago

The issue being that when you have a hammer, everything is a nail. Current models have good use cases, but people insist on using them for things they aren't good at. It's like using vice grips to loosen a nut and then being surprised when you round it out.

[–] ObviouslyNotBanana@lemmy.world 56 points 8 months ago

I mean it's cool and all but it's not like the companies have given us any reason to trust them with it lol

[–] yarr@feddit.nl 49 points 8 months ago (3 children)

Who had trust in the first place?

[–] TheOgreChef@lemmy.world 25 points 8 months ago (1 children)

The same idiots that tried to tell us that NFTs were “totally going to change the world bro, trust me”

[–] Azal@pawb.social 24 points 8 months ago

I mean, public trust is dropping. Which means it went from "Ugh, this will be useless" to "Fuck, this will break everything!"

[–] RememberTheApollo_@lemmy.world 19 points 8 months ago

I was going to ask this. What was there to trust?

AI repeatedly screwed things up, enabled students to (attempt to) cheat on papers, lawyers to write fake documents, made up facts, could be used to fake damaging images from personal to political, and is being used to put people out of work.

What’s trustworthy about any of that?

[–] FluffyPotato@lemm.ee 41 points 8 months ago (3 children)

Good. I hope that once companies stop putting AI in everything because it's no longer profitable the people who can actually develop some good tech with this can finally do so. I have already seen this play out with crypto and then NFTs, this is no different.

Once the hype around making worse art with plagiarised materials and talking to a chatbot that makes shit up dies down, the companies looking to cash in on the trend will move on.

[–] Kraiden@kbin.run 12 points 8 months ago (7 children)

So I'm mostly in agreement with you, and I've said before I think we're at the "VR in the 80s" point with AI.

I'm genuinely curious about the legit use you've seen for NFTs specifically, though. I've only ever seen scams.

[–] Empyreus@lemmy.world 7 points 8 months ago (1 children)

At one point I agreed, but not anymore. AI is getting better by the day and is already useful for tons of industries. It's only going to grow and become smarter. Estimates already suggest that most energy produced around the world will go to AI within our lifetime.

[–] FluffyPotato@lemm.ee 17 points 8 months ago (9 children)

The current LLM version of AI is useful in some niche industries where finding specific patterns is useful, but how it's currently popularised is the exact opposite of where it's useful. A very obvious example is how it's accelerating search engines becoming useless: it's already hard to find accurate info due to the overwhelming amount of AI-generated articles with false info.

Also how is it a good thing that most energy will go to AI?

[–] erwan@lemmy.ml 5 points 8 months ago (2 children)

The difference is that AI has some usefulness while cryptocurrencies don't

[–] noodlejetski@lemm.ee 30 points 8 months ago
[–] cmnybo@discuss.tchncs.de 26 points 8 months ago (2 children)

I have never trusted AI. One of the big problems is that the large language models will straight up lie to you. If you have to take the time to double check everything they tell you, then why bother using the AI in the first place?

If you use AI to generate code, oftentimes it will be buggy and sometimes not work at all. There is also the issue of whether or not it just spat out a piece of copyrighted code that could get you in trouble if you use it in something.

[–] whoelectroplateuntil@sh.itjust.works 23 points 8 months ago (2 children)

Well sure, why would the world aspire to fully automated luxury communism without the communism? Just fully automated luxury economy for rich people and nothing for everyone else?

[–] Sterile_Technique@lemmy.world 23 points 8 months ago (18 children)

I mean, the thing we call "AI" nowadays is basically just a spell-checker on steroids. There's nothing really to trust or distrust about the tool specifically. It can be used in stupid or nefarious ways, but so can anything else.

[–] reflectedodds@lemmy.world 33 points 8 months ago

Took a look and the article title is misleading. It says nothing about trust in the technology and only talks about not trusting companies collecting our data. So really nothing new.

Personally I want to use the tech more, but I get nervous that it's going to bullshit me/tell me the wrong thing and I'll believe it.

[–] SkyNTP@lemmy.ml 16 points 8 months ago* (last edited 8 months ago) (5 children)

"Trust in AI" is layperson for "believe the technology is as capable as it is promised to be". This has nothing to do with stupidity or nefariousness.

[–] LupertEverett@lemmy.world 21 points 8 months ago* (last edited 8 months ago)

So people are catching up to the fact that the thing everyone loves to call "AI" is nothing more than a phone autocorrect on steroids? That electronics which can only execute a set of commands in order aren't going to develop the consciousness the term implies? And that the very same crypto/NFT bros have moved onto it so they have some new thing to hype and, in the case of the latter group, can continue stealing from artists?

Good.

[–] callouscomic@lemm.ee 19 points 8 months ago (1 children)

Only an idiot wouldn't have seen from the start that this would be stupid for a long time.

[–] GrayBackgroundMusic@lemm.ee 5 points 8 months ago

Anyone past the age of 30 who isn't skeptical of the latest tech hype cycle should probably get a clue. This has happened before; it'll happen again.

[–] masquenox@lemmy.world 9 points 8 months ago

There was any trust in (so-called) "AI" to begin with?

That's news to me.

[–] daddy32@lemmy.world 7 points 8 months ago (2 children)

I don't get all the negativity on this topic, and especially comparing current AI (the LLMs) to the nonsense of NFTs etc. Of course, one would have to be extremely foolish/naive or a stakeholder to trust the AI vendors. But the technology itself is, while not yet solid, genuinely useful in many, many use cases. It is an absolute productivity booster in those, and it enables use cases that were not possible or practical before. The one I have the most experience with is programming and programming-related stuff such as software architecture, where the LLMs absolutely shine, but there are others. The current generation can even self-correct without human intervention. In any case, even if this were the only use case ever, it would absolutely change the world and bring positive boosts in productivity across all industries - unlike NFTs.

[–] hex_m_hell@slrpnk.net 15 points 8 months ago

People who understand technology know that most of the tremendous benefits of AI will never be possible to realize within the griftocracy of capitalism. Those who don't understand technology can't understand the benefits because the grifters have confused them, and now they think AI is useless garbage because the promise doesn't meet the reality.

In the first case it's exactly like cryptography, where we were promised privacy and instead we got DRM and NFTs. In the second, it's exactly like NFTs because people were promised something really valuable and they just got robbed instead.

Management will regularly pass over the actually useful AI idea because it's really hard to explain, while funding the complete garbage "put AI on it" idea that doesn't actually help anyone. They do this because management is almost universally not technically competent. So the technically competent workers, who absolutely know the potential benefits, are still not able to leverage them, because management either doesn't understand or is actively engaging in a grift.
