this post was submitted on 28 Oct 2024
1536 points (98.7% liked)

[–] brucethemoose@lemmy.world 330 points 1 month ago* (last edited 1 month ago) (47 children)

As a fervent AI enthusiast, I disagree.

...I'd say it's 97% hype and marketing.

It's crazy how much FUD is flying around, and it legitimately buries good open research. It's also crazy what these giant corporations are explicitly saying they're going to do, and that anyone buys it. TSMC allegedly calling Sam Altman a 'podcast bro' is spot on, and I'd add "manipulative vampire" to that.

Talk to any long-time resident of localllama and similar "local" AI communities, the people who actually dig into this stuff, and you'll find immense skepticism, unlike the crypto-style AI bros you find on LinkedIn, Twitter and the like, who blot everything else out.

[–] falkerie71@sh.itjust.works 100 points 1 month ago (3 children)

For real. As a software engineer with basic knowledge of ML, I'm just sick of companies in every industry being so desperate to cling to the hype train that they're willing to label anything with AI, even if it has little or nothing to do with it, just to boost their stock value. I would be so uncomfortable being an employee having to do this.

[–] Mikelius@lemmy.world 32 points 1 month ago (1 children)

For sure, it seems like 90% of AI startups are nothing more than front-end wrappers for a GPT instance.

[–] dan@upvote.au 21 points 1 month ago* (last edited 1 month ago) (6 children)

They're all built on top of OpenAI, which is very unprofitable at the moment. Feels like the whole industry is built on a shaky foundation.

Putting the entire fate of your company in the hands of a different company (OpenAI) is not a great business move. I guess the successful AI startups will eventually transition to self-hosted models like Llama, if they survive that long.
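
For concreteness, a minimal sketch of what that transition might look like, assuming a llama.cpp- or Ollama-style local server that exposes an OpenAI-compatible chat endpoint (the URL and model name below are made up):

```python
import requests

# Hypothetical local endpoint and model name; llama.cpp's server and Ollama
# both expose an OpenAI-compatible API along these lines.
LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"
MODEL = "llama-3-8b-instruct"

def complete(prompt: str) -> str:
    """Send one chat-completion request to the self-hosted model."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    resp = requests.post(LOCAL_ENDPOINT, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(complete("Summarize this bug report in one sentence: ..."))
```

The point being: a startup whose "product" is a thin wrapper can swap the base URL and keep the same request shape, which is exactly why the moat is so thin.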

[–] Blackmist@feddit.uk 29 points 1 month ago (1 children)

TSMC are probably making more money than anyone in this gold rush by selling the shovels and picks, so if that's their opinion, I feel people should listen...

There's little in the AI business plan other than hurling money at it and hoping job losses ensue.

[–] conciselyverbose@sh.itjust.works 19 points 1 month ago

Seriously, I'd love to be enthusiastic about it because it's genuinely cool what you can do with math.

But the lies that are shoved in our faces are just so fucking much and so fucking egregious that it's pretty much impossible.

And on top of that, LLMs are hugely crowding out funding for genuinely interesting approaches.

[–] WoodScientist@lemmy.world 17 points 1 month ago (1 children)

I think we should indict Sam Altman on two sets of charges:

  1. A set of securities fraud charges.

  2. 8 billion counts of criminal reckless endangerment.

He's out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there's a good chance they won't be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he's telling the truth, he's endangering us all. If he's lying, then he's committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.

[–] NABDad@lemmy.world 105 points 1 month ago (4 children)

I had a professor in college who said that when an AI problem is solved, it is no longer AI.

Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they're just tools we use without thinking about them.

I'm sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting the words I want to type based on the statistical likelihood of what comes next, chosen from the group of possible words my gesture could represent. This would have been the realm of AI once, but now it's just the keyboard app on my phone.
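
As a toy illustration of that idea (nothing like a real keyboard's model; the candidate words, shape scores and bigram probabilities are all made up), the prediction step boils down to weighting how well each candidate matches the gesture against how likely it is to follow the previous word:

```python
# Toy sketch: rank candidate words for one swipe by combining a made-up
# gesture "shape match" score with a tiny bigram language model.
gesture_scores = {"hello": 0.9, "hollow": 0.6, "hells": 0.4}  # how well the swipe fits each word
bigram_prob = {("say", "hello"): 0.05,
               ("say", "hollow"): 0.001,
               ("say", "hells"): 0.0005}  # likelihood of each word following "say"

def predict(previous_word: str, candidates: dict[str, float]) -> str:
    """Pick the candidate with the best combined gesture + language-model score."""
    def score(word: str) -> float:
        lm = bigram_prob.get((previous_word, word), 1e-6)  # smoothing for unseen pairs
        return candidates[word] * lm
    return max(candidates, key=score)

print(predict("say", gesture_scores))  # -> "hello"
```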

[–] peopleproblems@lemmy.world 76 points 1 month ago (3 children)

Yup.

I don't know why. The people marketing it have absolutely no understanding of what they're selling.

Best part is that I get paid if it works as they expect and I get paid if I have to decommission or replace it. I'm not the one developing the AI they're wasting money on; they just demanded I use it.

That's true software engineering, folks. Decoupling doesn't just make it easier to program and reuse; it saves your job when you need to retire something later, too.
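
A minimal sketch of what that decoupling looks like in practice (names are illustrative, not anyone's real codebase): the rest of the system talks to a small interface, so the vendor's AI piece can be swapped out or retired without touching the callers.

```python
from typing import Protocol

class Summarizer(Protocol):
    """The only thing the rest of the codebase is allowed to depend on."""
    def summarize(self, text: str) -> str: ...

class VendorAISummarizer:
    """Thin wrapper around whatever AI product management demanded this quarter."""
    def summarize(self, text: str) -> str:
        raise NotImplementedError("vendor SDK call would go here")

class PlainSummarizer:
    """Boring fallback you quietly switch to when the AI gets decommissioned."""
    def summarize(self, text: str) -> str:
        first_sentence = text.split(".")[0].strip()
        return first_sentence + "."

def generate_report(summarizer: Summarizer, text: str) -> str:
    # Callers never know (or care) which implementation they were given.
    return summarizer.summarize(text)

print(generate_report(PlainSummarizer(), "AI is 90% marketing. The rest is reality."))
```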

[–] jagged_circle@feddit.nl 37 points 1 month ago (2 children)

Their goal isn't to make AI.

The goal of both the VCs and the startups is to make money. That's why.

[–] Revan343@lemmy.ca 24 points 1 month ago

The people marketing it have absolutely no understanding of what they're selling.

Has it ever been any different? Like, I'm not in tech, I build signs for a living, and the people selling our signs have no idea what they're selling.

[–] NeilBru@lemmy.world 75 points 1 month ago* (last edited 1 month ago)

I make DNNs (deep neural networks), the current trend in artificial intelligence modeling, for a living.

Much of my ancillary work consists of deflating/tempering the C-suite's hype and expectations of what "AI" solutions can solve or completely automate.

DNN algorithms can be powerful tools and muses in scientific endeavors, engineering, creativity and innovation. They aren't full replacements for the power of the human mind.

I can safely say that many, if not most, of my peers in DNN programming and data science are humble in our approach to developing these systems for deployment.

If anything, studying this field has given me an even more profound respect for the billions of years of evolution required to produce the power and subtleties of intelligence as we narrowly understand it within anthropological, neuroscientific, and/or historical frameworks.

[–] narc0tic_bird@lemm.ee 75 points 1 month ago (2 children)

Sounds about right. There are some valid and good use cases for "AI", but the majority is just buzzword marketing.

[–] billwashere@lemmy.world 55 points 1 month ago (1 children)

I have lots of uses for Attack Insects….

[–] ipkpjersi@lemmy.ml 64 points 1 month ago (1 children)

That's about right. I've been using LLMs daily to automate a lot of the cruft work from my dev job; it's like having a knowledgeable intern who sometimes impresses you with their knowledge but needs a lot of guidance.

[–] eldavi@lemmy.ml 31 points 1 month ago* (last edited 1 month ago) (6 children)

Watch out; I learned the hard way in an interview that I do this so much that I can no longer create Terraform and Ansible playbooks from scratch.

Even a basic API call from scratch was difficult to remember, and I'm sure I looked like a hack to them, since they treated me as such.
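
For reference, the kind of "basic API call from scratch" being talked about is roughly this much code (the URL, token and response shape below are all made up):

```python
import requests

response = requests.get(
    "https://api.example.com/v1/users",           # illustrative endpoint
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    params={"page": 1},
    timeout=10,
)
response.raise_for_status()
for user in response.json().get("results", []):
    print(user.get("name"))
```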

[–] orgrinrt@lemmy.world 19 points 1 month ago

In addition, there have been studies released lately (not sure how well established they are, so take this with a grain of salt) indicating that using LLMs for dev work correlates with increased perceived efficiency/productivity but a decrease in actual efficiency/productivity.

After some initial excitement, I've dialed back using them to zero, and my contributions have been on the increase. I think it just feels good to spitball, which translates to a heightened sense of excitement while working. But it's really just much faster and more convenient to do the boring stuff with snippets and templates etc., if not as exciting. We've been doing pair programming with humans lately, and while that's slower and less efficient too, it seems to contribute to a rise in quality and fewer problems in code review later, while also providing the spitballing side. In a much better format, too, I think, though I guess that's subjective.

[–] Rolder@reddthat.com 47 points 1 month ago (4 children)

AI as we know it does have its uses, but I would definitely agree that 90% of it is just marketing hype.

[–] MystikIncarnate@lemmy.ca 43 points 1 month ago (7 children)

I think when the hype dies down in a few years, we'll settle into a couple of useful applications for ML/AI, and a lot will be just thrown out.

I have no idea what will be kept and what will be tossed but I'm betting there will be more tossed than kept.

[–] TheImpressiveX@lemmy.ml 37 points 1 month ago (11 children)

What happened to Linus? He looks so old now...

[–] Bassman1805@lemmy.world 162 points 1 month ago (10 children)
[–] Telorand@reddthat.com 16 points 1 month ago (1 children)

Not especially old, though; he looks like a 54yo dev. Reminds me of my uncles when they were 54yo devs.

[–] lechatron@lemmy.today 20 points 1 month ago

As a 46-year-old dev, I'm starting to look that way too.

[–] henfredemars@infosec.pub 24 points 1 month ago

What happened to him is happening now to you.

[–] atk007@lemmy.world 30 points 1 month ago* (last edited 1 month ago) (1 children)

I am thinking of deploying a RAG (retrieval-augmented generation) system to ingest all of Linus's emails, commit messages and pull request comments, and then we'll have a Linus chatbot.
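
Sketched very roughly, the RAG idea boils down to: embed every document, retrieve the ones closest to the question, and stuff them into the prompt. Everything below (the embedding, the two-document "corpus", the prompt) is a stand-in, not a real pipeline:

```python
import math

def embed(text: str) -> list[float]:
    """Placeholder embedding; a real system would use a sentence-embedding model."""
    return [text.lower().count(c) / max(len(text), 1) for c in "abcdefghijklmnopqrstuvwxyz"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

corpus = [
    "Commit message: fix null pointer dereference in the scheduler",
    "Email: please stop sending patches that break the build",
]
index = [(doc, embed(doc)) for doc in corpus]

def build_prompt(question: str, top_k: int = 1) -> str:
    q = embed(question)
    hits = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)[:top_k]
    context = "\n".join(doc for doc, _ in hits)
    # A real system would now send this prompt to an LLM instead of returning it.
    return f"Answer in Linus's voice using only this context:\n{context}\n\nQ: {question}"

print(build_prompt("What does Linus think about patches that break the build?"))
```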

[–] HawlSera@lemm.ee 29 points 1 month ago* (last edited 1 month ago) (2 children)

The only time I've seen AI work well is for things like game development: mainly upscaling textures and filling in missing frames of older games so they can run at higher frame rates without being choppy. It might even have applications for getting more voice acting done, if SAG and Silicon Valley can find an arrangement that works out well for both parties...

If not for that, I'd say 10% reality is being... incredibly favorable to the tech bros.

[–] zxqwas@lemmy.world 23 points 1 month ago (3 children)

Like with any new technology. Remember the blockchain hype a few years back? Give it a few years and we will have a handful of areas where it makes sense and the rest of the hype will die off.

Everyone sane probably realizes this. No one knows for sure exactly where it will succeed, so a lot of money and time is being spent on a 10% chance of a huge payout in case they guessed right.

[–] HawlSera@lemm.ee 21 points 1 month ago (2 children)

There's an area where blockchain makes sense!?!

[–] Chessmasterrex@lemmy.world 23 points 1 month ago (4 children)

I play around with the paid version of ChatGPT and I still don't have any practical use for it. It's just a toy at this point.

[–] Buddahriffic@lemmy.world 18 points 1 month ago

I used ChatGPT to help look up some syntax for a niche scripting language over the weekend, to cut down the time I spent working so I could get back to my weekend.

Then, yesterday, I spent time talking to a colleague who was familiar with the language to find the real syntax, because ChatGPT just made shit up and doesn't seem to have been accurate about any of the details I asked about.

Though it did help me realize that this whole time when I thought I was frying things, I was often actually steaming them, so I guess it balances out a bit?

[–] Zip2@feddit.uk 22 points 1 month ago* (last edited 1 month ago) (3 children)

Oh please. Wait until they release double-sided, double-density 128bit AI quantum blockchain that runs on premises/in the cloud edge hybrid.

[–] pHr34kY@lemmy.world 21 points 1 month ago (3 children)

I'm waiting for the part where it gets used for things that are not lazy, manipulative and dishonest. Until then, I'm sitting it out like Linus.

[–] Buttflapper@lemmy.world 20 points 1 month ago (2 children)

Copilot by Microsoft is completely and utterly shit but they're already putting it into new PCs. Why?

[–] explodicle@sh.itjust.works 20 points 1 month ago (3 children)

Just chiming in as another guy who works in AI who agrees with this assessment.

But it's a little bit worrisome that we all seem to think we're in the 10%.

[–] FlyingSquid@lemmy.world 19 points 1 month ago (2 children)

Linus is known for his generosity.
