this post was submitted on 08 Jun 2025
275 points (100.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

top 50 comments
[–] [email protected] 86 points 1 week ago (2 children)

You should probably mention that this is an article from 7 months ago.

[–] [email protected] 23 points 1 week ago (3 children)
[–] [email protected] 34 points 1 week ago

No, they already stole everything, so there’s nothing left they can use to train and improve further.

[–] [email protected] 27 points 1 week ago (5 children)
[–] [email protected] 33 points 1 week ago (1 children)

[Citation needed]

If anything the LLMs have gotten less useful and started hallucinating even more obviously now.

[–] [email protected] 10 points 1 week ago

7 months ago: https://web.archive.org/web/20241210232635/https://openlm.ai/chatbot-arena/ Now: https://web.archive.org/web/20250602092229/https://openlm.ai/chatbot-arena/

You can see that o1-mini, a silver (almost gold) model, is now a middle-of-the-road copper model.

Note that Chatbot Arena scores models relatively: it shows two outputs side by side (without the model names), people select the output they prefer, and the ranking is aggregated from those pairwise preferences. Not sure what accounts for gold/silver/copper.
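For context, leaderboards like this are typically built from exactly those pairwise votes using an Elo-style (or Bradley-Terry) rating update. A minimal sketch of the Elo version in Python, purely illustrative and not Chatbot Arena's actual code:

```python
# Toy Elo update from pairwise preference votes.
# Illustrative only: the K-factor and function names are assumptions,
# not Chatbot Arena's actual implementation.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return both models' updated ratings after one preference vote."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# An upset win by the lower-rated model pulls the ratings together:
print(elo_update(1200.0, 1300.0, a_won=True))  # -> (~1220.5, ~1279.5)
```

A model slipping from "silver" to "copper" just means newer models now win those head-to-head votes often enough to overtake its rating.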

[–] [email protected] 12 points 1 week ago
[–] [email protected] 9 points 1 week ago (6 children)

Yes. 7 months ago there weren't any reasoning models. The video models were far worse. Coding capabilities were nothing compared to what models have now.

AI has come far, fast, since this article was written.

[–] [email protected] 18 points 1 week ago (2 children)

Testing shows that current models hallucinate more than previous ones. OpenAI rebadged ChatGPT 5 to 4.5 because the gains were so meagre that they couldn't get away with pretending it was a serious leap forward. "Reasoning" sucks: the model just leaps to a conclusion as usual, then makes up steps that sound like they lead to that conclusion; in many cases the steps and the conclusion don't match, and because the effect is achieved by running the model multiple times, the cost is astronomical. So far just about every negative prediction in this article has come true, and every "hope for the future" has fizzled utterly.

Are there minor improvements in some areas? Yeah, sure. But you have to keep in mind the big picture that this article is painting; the economics of LLMs do not work if you're getting incremental improvements at exponential costs. It was supposed to be the exact opposite; LLMs were pitched to investors as a "hyperscaling" technology that was going to rapidly accelerate in utility and capability until it hit escape velocity and became true AGI. Everything was supposed to get more, not less, efficient.

The current state of AI is not cost effective. Microsoft (just to pick on one example) is making somewhere in the region of a few tens of millions a year off of Copilot (revenue, not profit), on an investment of tens of billions a year. That simply does not work. The only way for that to work is not only for the rate of progress to be accelerating, but for the rate of acceleration to be accelerating. We're nowhere near close to that.

The crash is coming, not because LLMs cannot ever be improved, but because it's becoming increasingly clear that there is no avenue for LLMs to be efficiently improved.

[–] [email protected] 6 points 1 week ago* (last edited 1 week ago) (2 children)

DeepSeek showed there is potential in abandoning the AGI pathway (which is impossible with LLMs) and instead training lots and lots of different specialized models that can be switched between for different tasks (at least, that's how I understand it).

So I'm not going to assume LLMs will hit a wall, but it's going to require something else paradigm shifting that we just aren't seeing out of the current crop of developers.
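What's described above is roughly the mixture-of-experts idea: route each request to a small specialized model instead of one giant generalist. A toy sketch of the routing step, with entirely hypothetical model names, just to make the concept concrete:

```python
# Toy task router: pick a specialized model per request instead of
# one monolithic generalist. All model names here are hypothetical.

SPECIALISTS = {
    "code": "hypothetical-coder-7b",
    "math": "hypothetical-math-7b",
    "chat": "hypothetical-chat-7b",
}

KEYWORDS = {
    "code": ("function", "compile", "bug", "python"),
    "math": ("integral", "prove", "equation", "sum"),
}

def route(prompt: str) -> str:
    """Crude keyword router; real systems learn this gating from data."""
    text = prompt.lower()
    for task, words in KEYWORDS.items():
        if any(w in text for w in words):
            return SPECIALISTS[task]
    return SPECIALISTS["chat"]  # fall back to the general chat model

print(route("Why does this Python function not compile?"))
# -> hypothetical-coder-7b
```

The economics are the point: you only pay to run the specialist you invoke, not one enormous model for every request.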

[–] [email protected] 7 points 1 week ago (2 children)

Yes, but the basic problem doesn't change; you're spending billions to make millions. And Deepseek's approach only works because they're able to essentially distill the output of less efficient models like Llama and GPT. So they haven't actually solved the underlying technical issues, they've just found a way to break into the industry as a smaller player.
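"Distill" here refers to knowledge distillation: training a smaller student model to imitate a larger teacher model's output distribution rather than learning everything from raw data. A toy sketch of the core loss, purely illustrative:

```python
# Toy knowledge-distillation loss: push a small "student" model's output
# distribution toward a large "teacher" model's. Pure illustration;
# real training does this over batches of logits with autograd.
import math

def softmax(logits, temp=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    exps = [math.exp(x / temp) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(student_logits, teacher_logits, temp=2.0):
    """KL divergence from the softened teacher to the student."""
    p = softmax(teacher_logits, temp)  # teacher targets
    q = softmax(student_logits, temp)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

print(distill_loss([2.0, 0.5, 0.1], [1.8, 0.6, 0.2]))  # small positive number
```

The cheapness comes from the teacher having already done the expensive part of the learning.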

At the end of the day, the problem is not that you can't ever make something useful with transformer models; it's that you cannot make that useful thing in a way that is cost effective. That's especially a problem if you expect big companies like Microsoft or OpenAI to continue to offer these services at an affordable price. Yes, Copilot can help you code, but that's worth Jack shit if the only way for Microsoft to recoup their investment is by charging $200 a month for it.

load more comments (2 replies)
[–] [email protected] 6 points 1 week ago

That was pretty much always the only potential path forward for LLM type AIs. It's an extension of the same machine learning technology we've been building up since the 50s.

Everyone trying to approximate an AGI with it has been wasting their time and money.

[–] [email protected] 2 points 1 week ago (2 children)

Amazon did not turn a profit for 14 years. That's not a sign of a crash.

AI is progressing and different routes are being tried. Some might not work as well as others. We are on a very fast train. I think a crash is unlikely. The prize is too valuable, and it's strategically impossible to leave it to someone else.

[–] [email protected] 12 points 1 week ago (6 children)

Amazon isn't a good comparison. People need to buy things. Having a better way to do that was and is worth billions.

There is no revolutionary product that people need on the horizon for AI. The products released using it are mostly just fun toys, because it can't be trusted with anything serious. There's no indication this will change in the near-to-distant future.

load more comments (6 replies)
[–] [email protected] 6 points 1 week ago (5 children)

Assuming it cost Microsoft $0 to provide their AI services (this is up there with "assuming all of physics stops working"), and every dollar they make from Copilot was pure profit, it would take Microsoft 384 years to recoup one year of investment in AI.
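To make the arithmetic behind a figure like 384 years concrete (the dollar amounts below are illustrative assumptions, not Microsoft's published numbers):

```python
# Back-of-envelope payback time, treating all revenue as pure profit.
# Both figures are hypothetical, chosen only to illustrate the ratio.
annual_ai_investment = 19.2e9    # assume ~$19.2B/year poured into AI
annual_copilot_revenue = 50e6    # assume ~$50M/year in Copilot revenue

years_to_recoup = annual_ai_investment / annual_copilot_revenue
print(f"{years_to_recoup:.0f} years")  # -> 384 years
```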

And that's without even getting into the fact that in reality these services are so expensive to run that every time a customer uses them it's a net loss to the provider.

When Amazon started out, no one had heard of them. Everyone has heard of Microsoft. Everyone already uses Microsoft's products. Everyone has heard about AI. It's the only thing in tech that anyone is talking about. It's hard to see how they could be doing more to market this. Same story with OpenAI, Facebook, Google, basically every player in this space.

Even if they can solve the efficiency problems to the point where they can actually make a profit off of these things, there just isn't enough interest. AI does plenty of things that are useful, but nothing that's truly vital, and it needs to be vital to have any hope of making back the money that's gone into it.

At present, there simply is not a path to profitability that doesn't rely on unicorn farts and pixie dust.

load more comments (5 replies)
load more comments (5 replies)
[–] [email protected] 8 points 1 week ago
[–] [email protected] 3 points 1 week ago

Nope, AI has already kinda peaked in what it can do currently.

[–] [email protected] 13 points 1 week ago

3 years ago Sam Altman said the current models had hit a wall, and the media blocked it out.

[–] [email protected] 63 points 1 week ago* (last edited 1 week ago) (2 children)

Don’t get my hopes up. I want them to lose as much of their dumb tech bro money as possible.

[–] [email protected] 9 points 1 week ago (2 children)

PSA:

Loose is the opposite of tight.

Lose is the opposite of win.

[–] [email protected] 3 points 1 week ago

Also lefty-loosey, righty-tighty, or if you want to translate from the Spanish expression "la derecha oprime y la izquierda libera" ("the right oppresses and the left liberates"), that's sage advice too.

[–] [email protected] 3 points 1 week ago

Lowe's is a store

[–] [email protected] 2 points 1 week ago (1 children)

The problem is it will hurt everyone when they fail.

[–] [email protected] 7 points 1 week ago (2 children)

Anyone relying on this shit deserves it. Let these venture capitalists throwing money at AI all burn.

[–] [email protected] 2 points 1 week ago

What I am saying is the investments are at a scale that could cause a recession when these companies fail, meaning it's likely to affect everyone in the economy we all work in. You won't need to be working on AI to feel the impact.

load more comments (1 replies)
[–] [email protected] 29 points 1 week ago
[–] [email protected] 26 points 1 week ago* (last edited 1 week ago) (2 children)

I have a theory about this.

The people who had both genius and work ethic made magic. That was the GPT, DALL-E, AlphaGo generation of AI. You can't make magic because you want to have a good career and you did some seminars. You have to do it because you have it burning inside you; when you're in a community of people who all have that type of vision, all pulling in the same direction, you can make things that were impossible before you did them.

(Not that I'm saying AI itself is necessarily a good thing, certainly not in its present trajectory. I'm just saying that getting the tech from recognizing digits to ChatGPT was pretty fucking impressive.)

Now probably about 90% of the people in the field are there because it's a good career. And the people giving them their marching orders day-to-day are greedy idiots. You just can't make magic that way. All you can do is follow the road that's been laid down. The industry is just throwing orders of magnitude more electricity, money, and engineer time at these still-impossible remaining problems and hoping that'll get them suddenly solved.

Remember when ChatGPT was programmed by competent and humble people (many of whom have since been fired because they fought with Sam Altman), and so it kept emphasizing to people that it was not an AI (meaning an AGI), just a language model? They felt it was important for people to understand (not that it did any good). Anyway, those days are gone, and with them, the forward progress that people like that can make.

Why? Because we're trying to make magic with career people. It doesn't work that way, never has. It's like trying to start a fire with a bunch of rocks. Rocks are fine. Fire is fine, if a little bit dangerous. But they are not interchangeable.

[–] [email protected] 24 points 1 week ago (2 children)

GPT, DALL-E and AlphaGo are not the type of magic born from passion. They are the type of magic born from years of researchers doing the mostly boring work of science. Those are career people. They are just career researchers.

The current public AI scene is what happens when commercial interests take over. They can push the current state of the art to its limit, but aren't going to make any fundamental breakthroughs.

[–] [email protected] 13 points 1 week ago (3 children)

Those are career people. They are just career researchers.

People mostly only accept the extremely shitty working conditions of the research industry if they have at least one of: a lot of passion, extreme egomania, or independent wealth. Preferably more than one. Most people who pursue a research career do at least start out with a lot of passion.

load more comments (3 replies)
[–] [email protected] 5 points 1 week ago (1 children)

Exactly, they are a breakthrough built on top of decades of steadily paced progress. Those decades are conveniently ignored by the commercial interests, who like to act as if those once-in-a-decade breakthroughs are actually the normal pace of research at their company, and the next one is right around the corner.

[–] [email protected] 3 points 1 week ago (1 children)

Ironically, that's exactly why all the greedy cunt executives think general AI is right around the corner... They haven't been paying attention and have no clue about the decades of research and development that got it this far.

[–] [email protected] 3 points 1 week ago

Nor do they remember the previous AI boom of the '90s and 2000s, when the likes of Lernout & Hauspie were also promising the world. In that case they went bankrupt and executives were convicted of fraud, because they resorted to "creative" accounting to paper over solvency issues.

[–] [email protected] 6 points 1 week ago

Career people can easily make magic. Lots and lots of career people make magic.

The problem is the greedy fucks. When choice X appears to make 10x the money as creative, difficult decision Y, which choice is made every time when it's the greedy fuck choosing?

[–] [email protected] 16 points 1 week ago (2 children)

The only people ever obsessed with AI were corporate heads looking to reduce headcount in their companies and to suck up more VC money.

load more comments (2 replies)
[–] [email protected] 15 points 1 week ago

Yes, of course they're at the limit, and because they poisoned the internet with generative bullshit, they can't scrape it and expect improvement. But they're still scraping it, so they're poisoning themselves.

The end of the article has classic snake-oil trash: the idea that newer AI could be trained to think similarly to how humans think. Yes, great; scientists have been working on that for decades. Good luck succeeding where nobody else has. There's a reason that so-called weak AI, the so-called expert systems, are the ones we all remember as having lasted for decades.

[–] [email protected] 14 points 1 week ago

It's worse than that... They are broken. Like, they are all fucking broken.

[–] [email protected] 11 points 1 week ago (1 children)

Shh. Let it happen. Let the poison take hold.

[–] [email protected] 24 points 1 week ago

I don't think it's just the poison, but an inherent limitation on the technology. An LLM is never going to be able to have critical thinking skills.

[–] [email protected] 10 points 1 week ago (1 children)

Quality over quantity will define the next generation of AI models.

[–] [email protected] 4 points 1 week ago (1 children)

Figuring out more efficient models would be a big boost as well.

Lots of improvements over the years, and I'm sure there's a lot more that can be done.

[–] [email protected] 2 points 1 week ago
[–] [email protected] 8 points 1 week ago (1 children)

We're going to need a new BS tech meme for arsehole investors to speculate in. What's next? I'm guessing something medical. Personalized health care, perhaps.

[–] [email protected] 6 points 1 week ago

I think you're right.

We're due for something DNA-based, since they can all grab a cheap copy of the 23andMe data set now.

[–] [email protected] 7 points 1 week ago (1 children)

I got into AI in 2023 for a few months just to see if I could make sense of it all.

The whole thing as an industry is almost entirely smoke and mirrors meant to confuse, obscuring theft and fraud.

It does have some neat applications and opportunities, (generating templates; storyboarding) but warrants a small R&D team of enthusiasts, not the collective investment and resources of entire nations.

[–] [email protected] 3 points 1 week ago

what you’re forgetting is: it makes all video blackmail tapes useless… or it’s getting closer to that…
at any rate, that’s the only way i can rationalize the amount of money going into it…
btw, did you know that when the FBI raided Epstein’s island, they forgot to get a warrant for his safe, which was full of hard drives and videos… after they got the warrant, the safe had been emptied….
from a secured fbi crime scene on an island…
i guess nobody talks about it because there’s not much else to the story….
well, except, how was the safe even excluded? if they’re searching the whole property, wouldn’t the safe be included?
