this post was submitted on 22 Dec 2024
357 points (95.7% liked)

Technology

[–] eleitl@lemm.ee 4 points 10 hours ago (1 children)

Page doesn't render properly.

[–] Joker@sh.itjust.works 4 points 10 hours ago (1 children)
[–] eleitl@lemm.ee 1 points 8 hours ago

Thanks -- it has been clear for a while that another AI winter is coming, at the latest when Global Financial Crisis 2 arrives.

[–] LenielJerron@lemmy.world 108 points 19 hours ago* (last edited 19 hours ago) (2 children)

A big issue that a lot of these tech companies seem to have is that they don't understand what people want; they come up with an idea and then shove it into everything. There are services that I have actively stopped using because they started cramming AI into things; for example I stopped dual-booting with Windows and became Linux-only.

AI is legitimately interesting technology with genuine specialized use-cases, e.g. sorting large amounts of data, or optimizing strategies within highly constrained circumstances (like chess or Go). However, as a member of the general public, 99% of what I see people pushing as AI these days just seems like garbage: bad art, bad translations, and incorrect answers to questions.

I do not understand all the hype around AI. I can understand the danger: people who don't see that it's bad are using it in place of people who know how to do things. But in my teaching, for example, I've never had any issues with students cheating using ChatGPT; I semi-regularly run the problems I assign through ChatGPT, and it gets enough of them wrong that I can't imagine any student would be inclined to cheat with it again after their first grade comes in. (In this sense, it's actually impressive technology - we've had computers that can do advanced math highly accurately for a while, but we've finally developed one that's worse at math than the average undergrad in a gen-ed class!)

[–] Brodysseus@lemmy.dbzer0.com 6 points 11 hours ago

I've run some college homework through 4o just to see, and it's remarkably good at generating proofs for math and algorithms. Sometimes it's not quite right, but it's usually on the right track to get started.

In some of the busier classes I'm almost certain students do this, because my homework grades would be below the mean while my exam grades were well above it.

[–] Voroxpete@sh.itjust.works 42 points 17 hours ago (2 children)

The answer is that it's all about "growth". The fetishization of shareholders has reached its logical conclusion, and now the only value companies have is in growth. Not profit, not stability, not a reliable customer base or a product people will want. The only thing that matters is whether you can make your share price increase faster than the interest on a bond (which is pretty high right now).

To make share price go up like that, you have to do one of two things: show that you're bringing in new customers, or show that you can make your existing customers pay more.

For the big tech companies, there are no new customers left. The whole planet is online. Everyone who wants to use their services is using their services. So they have to find new things to sell instead.

And that's what "AI" looked like it was going to be. LLMs burst onto the scene promising to replace entire industries, entire workforces. Huge new opportunities for growth. Lacking anything else, big tech went in HARD on this, throwing untold billions at partnerships, acquisitions, and infrastructure.

And now they have to show investors that it was worth it. Which means they have to produce metrics that show people are paying for, or might pay for, AI-flavoured products. That's why they're shoving it into everything they can. If they put AI in Notepad, then they can claim that every time you open Notepad you're "engaging" with one of their AI products. If they put Recall on your PC, every Windows user becomes an AI user. Google can now claim that every search is an AI interaction because of the bad summary that no one reads. The point is to show "engagement" and "interest", which they can then use to promise that down the line huge piles of money will fall out of this piñata.

The hype is all artificial. They need to hype these products so that people will pay attention to them, because they need to keep pretending that their massive investments got them in on the ground floor of a trillion dollar industry, and weren't just them setting huge piles of money on fire.

[–] MagicShel@lemmy.zip 6 points 15 hours ago* (last edited 15 hours ago) (10 children)

I know I'm an enthusiast, but can I just say I'm excited about NotebookLM? I think it will be great for documenting application development. Having a shared notebook that knows the environment, configuration, architecture, and standards for an application, and can answer specific questions about it, could be really useful.

"AI Notepad" is really underselling it. I'm trying to load up massive Markdown documents to feed into NotebookLM to try it out. I don't know if it'll work as well as I'm hoping, because it takes time to put together enough information, in a format the AI can easily digest, to be worthwhile. But I'm hopeful.

That's not to take away from your point: the average person probably has little use for this, and wouldn't want to put in the effort to make it worthwhile. But spending way too much time obsessing about nerd things is my calling.

[–] Voroxpete@sh.itjust.works 11 points 13 hours ago (1 children)

From a nerdy perspective, LLMs are actually very cool. The problem is that they're grotesquely inefficient. That means that, practically speaking, whatever cool use you come up with for them has to work in one of two ways; either a user runs it themselves, typically very slowly or on a pretty powerful computer, or it runs as a cloud service, in which case that cloud service has to figure out how to be profitable.

Right now we're not being exposed to the true cost of these models. Everyone is in the "give it out cheap / free to get people hooked" stage. Once the bill comes due, very few of these projects will be cool enough to justify their costs.

Like, would you pay $50/month for NotebookLM? However good it is, I'm guessing it's probably not that good. Maybe it is. Maybe that's a reasonable price to you. It's probably not a reasonable price to enough people to sustain serious development on it.

That's the problem. LLMs are cool, but mostly in a "Hey this is kind of neat" way. They do things that are useful, but not essential, but they do so at an operating cost that only works for things that are essential. You can't run them on fun money, but you can't make a convincing case for selling them at serious money.

[–] MagicShel@lemmy.zip 7 points 12 hours ago

Totally agree. It comes down to how often this thing is worth it to me if I pay the true cost. At work, yes, it would save well over $50/mo if it works well. At home it would be difficult to justify that cost, but I'd also use it less, so the cost could be lower. I currently pay $50/mo between ChatGPT and NovelAI (and the latter doesn't operate at a loss), so it's worth a bit to me just to nerd out over it. It certainly doesn't save me money, except in the sense that it's time and money I don't spend on some other endeavor.

My old video card is painfully slow for local LLMs, but I dream of spending on a big card that runs closer to cloud speeds, even if the quality is lower, for easier tasks.

[–] einlander@lemmy.world 27 points 20 hours ago (1 children)
[–] SlopppyEngineer@lemmy.world 7 points 19 hours ago (4 children)

The article does mention that when the AI bubble deflates, the big players will take the defunct AI infrastructure, fold it into their cloud business to capture more of that market, and, in the end, make the line go up.

[–] Voroxpete@sh.itjust.works 8 points 16 hours ago* (last edited 13 hours ago)

That's not what the article says.

They're arguing that AI hype is being used as a way of driving customers towards cloud infrastructure over on-prem. Once a company makes that choice, it's very hard to get them to go back.

They're not saying that AI infrastructure specifically can be repurposed, just that in general these companies will get some extra cloud business out of the situation.

AI infrastructure is highly specialized, and much like ASICs for the blockchain nonsense, will be somewhere between "very hard" and "impossible" to repurpose.

[–] walter_wiggles@lemmy.nz 8 points 19 hours ago

Big tech is out of ideas and needs AI to work in order to drive growth.

[–] Zos_Kia@lemmynsfw.com 5 points 17 hours ago (1 children)

To have a bubble you need companies with no clear path to monetization being over-valued to an extreme degree. This leaves me wondering: which company, specifically? Are they talking about Nvidia? OpenAI? Midjourney? Or the slew of LLM-powered SaaS products that have started appearing? How exactly are we defining "over-valuation" here? Are we talking about the tech industry as a whole?

We often invite the comparison to the dot-com bubble, but that's apples to oranges. Back then you had companies making social networks for dogs or similar bullshit, valued in the billions and getting a stock-market ticker before making a single dime. Or companies with outlandish promises, such as delivering to any home in the US, in under an hour, for a low price, building warehouses by the hundreds before having a storefront. What would be the 2024 equivalent? If a bubble is about to deflate, there should be dozens of comparable examples.

[–] UraniumBlazer@lemm.ee 3 points 17 hours ago (2 children)

Exactly. There's a very clear path to monetisation for the bigger tech companies (ofc, not for the random startup that screams "AI quantum computing blockchain reeeee").

Lemmy is just incredibly biased against AI because it could replace a shit ton of jobs and lead to a crazy amount of wealth inequality. However, people need to remember that the problem isn't the tech; it's the system the tech is being innovated in.

Denying AI is just going to make this issue a lot worse. We need to work to make AI beneficial for all of us instead of just the capitalists. But somehow leftist talk surrounding AI has just been about hating on it or denying it, instead of preparing for a world in which it could be critical infrastructure very soon.

[–] sudneo@lemm.ee 6 points 14 hours ago

What job could LLMs possibly replace? If you can replace a job with an LLM, it means either that the job wasn't needed in the first place (a bullshit job), or that you could replace it with a dice roll (e.g., decision-making processes), since an LLM's output depends essentially on what was in the training material, which we don't know (i.e., the answer is essentially random).

[–] Zos_Kia@lemmynsfw.com 5 points 16 hours ago (1 children)

I don't think it's just Lemmy; I had similar conversations on Reddit. People don't realize that the companies they claim are over-valued actually have very strong business fundamentals. That's why articles like OP's will never mention any names or figures. I guess it's very convincing for outsiders, but it doesn't stand up to any amount of scrutiny.

If you take OpenAI, for example: they went from $0 to $3.6B in annual revenue in just two fucking years. How is that not worth a boatload of money? Even Uber didn't have that kind of growth, and they burned a LOT more cash than OpenAI is burning right now.

As for the "AI quantum computing blockchain reeeee" projects... well, they have a very hard time raising money right now, and when they do, it's at pretty modest valuations. The market is not as dumb as it is portrayed.

[–] Voroxpete@sh.itjust.works 8 points 13 hours ago* (last edited 13 hours ago) (3 children)

> How is that not worth a boatload of money?

Because they spend $2.35 billion in operating costs for every $1 billion in revenue (gross, not net).

OpenAI loses money at an absolutely staggering rate, and every indication, including their own openly stated predictions, is that those costs will only increase.

And keep in mind, right now OpenAI gets a lot of their investment in the form of compute credits from Microsoft, which they get to spend at a massively discounted rate. That means that if they were actually buying their Azure time at market value they'd be closer to spending something like $5bn to make $1bn.

Again, I really need to be clear here: I'm not saying "to make $1 billion in profit." I'm saying revenue. They lose money every time someone uses their services. The more users they have, the more their losses grow. Even paid users cost them more than they pay, in most cases.

This is like a store that buys products at $10 and sells them at $4. It is the most insanely unprofitable business plan imaginable.
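To make the unit economics above concrete, here's a toy margin calculation (just a sketch using the rough figures quoted in this thread, not audited financials):

```python
def gross_margin(revenue: float, cost: float) -> float:
    """Margin as a fraction of revenue; negative means a loss."""
    return (revenue - cost) / revenue

# Quoted ratio: roughly $2.35 of operating cost per $1.00 of revenue
print(gross_margin(1.00, 2.35))   # about -1.35: $1.35 lost per dollar earned

# The store analogy: buy at $10, sell at $4
print(gross_margin(4.00, 10.00))  # -1.5: an even steeper loss per sale
```

Either way, growing revenue only scales the loss, which is the whole point: under these numbers, more usage means more money burned.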

And it's not getting better. Conversions to paid plans are at about 3%. Their enterprise sales are abysmal. Training costs are increasing exponentially with each new generation of models. Attempts to make their models more compute efficient have so far failed utterly.

OpenAI's path to profitability is basically "Invent true AGI." It's a wild fantasy with zero basis in reality that investors are shovelling money into because investors will shovel money into anything that promises infinite growth.
