"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."

Needless to say, we haven't seen anything like that yet. OpenAI's top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail's pace and requires constant supervision.

[–] [email protected] 273 points 3 weeks ago (4 children)

Correction, LLMs being used to automate shit doesn't generate any value. The underlying AI technology is generating tons of value.

AlphaFold 2 has advanced biochemistry research in protein folding by multiple decades in just a couple of years, taking us from roughly 150,000 known protein structures to 200 million in a single year.

[–] [email protected] 64 points 3 weeks ago (1 children)

Well sure, but you're forgetting that the federal government has pulled the rug out from under health research and has therefore made it so there is no economic value in biochemistry.

[–] [email protected] 9 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

How is that a qualification of anything they said? If our knowledge of protein folding has gone up by multiples, then it has gone up by multiples, regardless of whatever funding shenanigans Trump is pulling or what effects those might eventually have. None of that detracts from the value that has already been delivered, so I don't see how they are "forgetting" anything. At best, it's a circumstance that may factor in economically but doesn't say anything about AI's intrinsic value.

[–] [email protected] 23 points 3 weeks ago

Yeah, tbh, AI has been an insanely helpful tool in my analysis and writing. Never would I have been able to thoroughly investigate the appropriate statistical tests on my own. After following the sources and double-checking, of course, but still, super helpful.

[–] [email protected] 21 points 3 weeks ago (5 children)

Thanks. So the underlying architecture that powers LLMs has applications in things besides language generation, like protein folding and DNA sequencing.

[–] [email protected] 50 points 3 weeks ago (2 children)

alphafold is not an LLM, so no, not really

[–] [email protected] 27 points 3 weeks ago (2 children)

You are correct that AlphaFold is not an LLM, but both are possible because of the same breakthrough in deep learning, the transformer, and so they share similar architectural components.

[–] [email protected] 8 points 3 weeks ago (1 children)

A large language model is basically a translator; all it did was bridge the gap between us speaking normally and a computer understanding what we are saying.

The actual decisions all these "AI" programs make come from machine learning algorithms, and those algorithms have not fundamentally changed since we created them and started tweaking them in the 90s.

AI is basically a marketing term that companies jumped on to generate hype once they made the ML programs able to talk to you, but the programs are not actually intelligent in the same sense people are, at least by the definitions set by computer scientists.

[–] [email protected] 16 points 3 weeks ago (5 children)

What algorithm are you referring to?

The fundamental ideas, using matrix multiplication plus a nonlinear function, deep learning (i.e., backpropagating derivatives), and gradient descent in general, may not have changed, but the actual algorithms sure have.
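
(To make that concrete, here is the decades-old core, a matrix multiply, a loss, and gradient descent, as a toy NumPy sketch with made-up numbers of my own; real deep networks stack nonlinearities and backpropagate through many layers, but the update rule is the same idea.)

```python
import numpy as np

# Toy setup: fit a linear model y = X @ w by gradient descent.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

w = np.zeros(3)          # start from an arbitrary guess
lr = 0.1                 # learning rate
for _ in range(200):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
    w -= lr * grad                          # the classic descent step
print(w.round(2))        # recovers roughly [ 2. -1.  0.5]
```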

For example, the transformer architecture (utilized by most modern models and based on multi-headed self-attention), optimizers like AdamW, and the whole idea of diffusion for image generation are, I would say, quite disruptive.
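
For reference, the core of that self-attention mechanism fits in a few lines. This is a single head with untrained random weights and toy dimensions of my own choosing; real transformers run many such heads in parallel and add masking, normalization, and learned output projections:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Project the same tokens into queries, keys, and values.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Every position scores every other position, then mixes the values.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                        # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)          # (5, 16)
```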

Another point is that generative AI was generally belittled in the research community until around 2015 (a subjective impression; a meta-study would be needed to confirm it). The focus was mostly on classification, something not much talked about today in comparison.

[–] [email protected] 15 points 3 weeks ago

Image recognition models are also useful for astronomy. The largest black hole jet was discovered recently, and it was done, in part, by using an AI model to sift through vast amounts of data.

https://www.youtube.com/watch?v=wC1lssgsEGY

This thing is so big, it travels through the voids between the filaments of galactic superclusters and hits the next one over.

[–] [email protected] 10 points 3 weeks ago

It's always important to double-check the work of AI, but yeah, it excels at solving problems we've been using brute force on.

[–] [email protected] 67 points 3 weeks ago (1 children)

It is fun to generate some stupid images a few times, but you can't trust that "AI" crap with anything serious.

[–] [email protected] 47 points 3 weeks ago (1 children)

I was just talking about this with someone the other day. While it’s truly remarkable what AI can do, its margin for error is just too big for most if not all of the use cases companies want to use it for.

For example, I use the Hoarder app which is a site bookmarking program, and when I save any given site, it feeds the text into a local Ollama model which summarizes it, conjures up some tags, and applies the tags to it. This is useful for me, and if it generates a few extra tags that aren’t useful, it doesn’t really disrupt my workflow at all. So this is a net benefit for me, but this use case will not be earning these corps any amount of profit.
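
(For anyone curious what that kind of local pipeline looks like, here's a minimal sketch against Ollama's HTTP API; the model name, prompts, and tag format are my own illustrative choices, not Hoarder's actual implementation.)

```python
import json
import urllib.request

def ollama_generate(prompt, model="llama3", host="http://localhost:11434"):
    """Ask a local Ollama instance for a completion (non-streaming)."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

page_text = "..."  # the bookmarked page's extracted text goes here
summary = ollama_generate(f"Summarize this page in two sentences:\n\n{page_text}")
tags = ollama_generate(f"Give five comma-separated topic tags for:\n\n{page_text}")
print(summary)
print([t.strip() for t in tags.split(",")])
```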

On the other end, you have Google's Gemini, which now gives you an AI-generated answer to your queries. The point of this is to aggregate data from several sources within the search results and return it to you, saving you the time of having to look through several search results yourself. And like 90% of the time it actually does a great job. The problem lies in the goal, which is to save you from having to check individual sources, combined with its reliability rate. If I google 100 things and Gemini correctly answers 99 of them but completely hallucinates the 100th, that means all 100 times I have to check its sources and verify that what it said was correct. Which means I'm now back to just… you know… looking through the search results one by one like I would have anyway without the AI.
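
(The verification trap is easy to quantify: with independent 99%-accurate answers, a batch of 100 almost certainly contains at least one fabrication, so every single one still has to be checked. A back-of-the-envelope check, using the illustrative numbers above:)

```python
# Chance that at least one of n independent answers is wrong,
# given per-answer accuracy p.
p, n = 0.99, 100
print(1 - p ** n)  # ~0.63: you can't safely skip verifying any of them
```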

So while AI is far from useless, it can't currently be relied on for anything important and never will be, and that's where the money to be made is.

[–] [email protected] 17 points 3 weeks ago (2 children)

Even manual search results may lead you to incorrect sources, selection bias toward what you want to see, or even AI-generated slop, so the AI-generated results are just another layer on top. Link-aggregating search engines are slowly becoming useless at this rate.

[–] [email protected] 14 points 3 weeks ago (1 children)

While that's true, the thing that stuck out to me is not even that the AI was misled by finding AI slop, or by somebody falsely asserting something. I googled something with a particular yes-or-no answer: "Does X technology use Y protocol?" The AI came back with "Yes it does, and here's how it uses it," and upon visiting the reference page for that answer, it was documentation for that technology explaining very clearly that X technology does NOT use Y protocol, and then going into detail on why it doesn't. So even when everything lines up and the answer is clear and unambiguous, the AI can give you an entirely fabricated answer.

[–] [email protected] 58 points 3 weeks ago (7 children)

I've been working on an internal project for my job, a quarterly report on the most bleeding-edge use cases of AI, and the stuff being achieved is genuinely really impressive.

So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

The answer is the chatbot. If you have the technical nous to program machine learning tools, it can accomplish truly stunning feats at speeds not seen before.

If you don't know how to do, for example, a Fourier transform, you lack the skills to use the tools effectively. That's no one's fault; not everyone needs that knowledge, but it does explain the gap between promise and delivery. It can only help you do faster what you already know how to do.
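
(As an aside, the Fourier transform example is apt: the computation itself is one library call, and the skill is knowing to reach for it and how to read the output. A minimal NumPy sketch with made-up signal parameters of my own:)

```python
import numpy as np

fs = 1000                                  # sample rate in Hz
t = np.arange(0, 1, 1 / fs)                # one second of samples
# A signal with 50 Hz and 120 Hz components plus noise.
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
x += 0.2 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(x))          # magnitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)    # matching frequency axis
print(sorted(freqs[spectrum.argsort()[-2:]]))  # ~[50.0, 120.0]
```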

Same for coding: if you understand what your code does, it's a helpful tool for unsticking part of a problem, but it can't write the whole thing from scratch.

[–] [email protected] 16 points 3 weeks ago* (last edited 3 weeks ago)

For coding it's also useful for doing the menial grunt work that's easy but just takes time.

You're not going to replace a senior dev with it, of course, but it's a great tool.

My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine tuned properly, though.

[–] [email protected] 9 points 3 weeks ago (8 children)

So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

Merely calling an LLM "AI" shows how unqualified you are to comment on the "successes".

[–] [email protected] 23 points 3 weeks ago (6 children)

Not this again... LLM is a subset of ML which is a subset of AI.

AI is very very broad and all of ML fits into it.

[–] [email protected] 10 points 3 weeks ago

This is the issue with current public discourse, though. "AI" has become shorthand for the current GenAI hype cycle, meaning that for many people AI has effectively become a subset of ML.

[–] [email protected] 42 points 3 weeks ago (1 children)

LLMs in non-specialized application areas basically reproduce search. In specialized fields, most do the work that automation, data analytics, pattern recognition, purpose-built algorithms, and brute force did before. And yet the companies charge n times the amount for what are essentially these very conventional approaches, plus statistics. Not surprising at all. Just in awe that the parallels to snake oil weren't immediately obvious.

[–] [email protected] 26 points 3 weeks ago (1 children)

I think AI is generating negative value... the huge power usage is akin to speculative blockchain currencies. Barring some biochemistry and other very, very specialized uses, it hasn't given us anything other than, as you've said, plain-language search (with bonus hallucination bullshit, yay!)... snake oil, indeed.

[–] [email protected] 13 points 3 weeks ago (1 children)

It's a little more complicated than that, I think. LLMs and AI are not remotely the same thing, and they have very different use cases.

I believe in AI for sure in some fields, but I understand the skeptics around LLMs.

But the difference AI is already making in the medical industry and hospitals is no joke. X-ray scanning and early detection of severe illness are the specific uses deployed today, and they will save thousands of lives and millions of dollars or euros.

My point is, it's not that black and white.

[–] [email protected] 37 points 3 weeks ago (4 children)

Very bold move, in a tech climate in which CEOs declare generative AI to be the answer to everything, and in which shareholders expect the line to go up faster…

I half expect to next read an article about his ouster.

[–] [email protected] 10 points 3 weeks ago (1 children)

My theory is that it's only a matter of time until the firing sprees generate a backlog of actual work that the minor productivity gains from AI can't cover, and the investors start asking hard questions.

Maybe this is the start of the bubble bursting.

[–] [email protected] 35 points 3 weeks ago (2 children)

That is not at all what he said. He said that setting some arbitrary benchmark for the level or quality of the AI (e.g., that it's smarter than a 5th grader or as intelligent as an adult) is meaningless, and that the real measure is whether value is created and put out into the real world. He also mentions global growth being up by 10%. He doesn't provide data correlating that growth with the use of AI, and I doubt such data exists yet. Let's not twist what he said into "Microsoft CEO says AI provides no value" when that is not what he said.

[–] [email protected] 9 points 3 weeks ago

I think that's pretty clear to people who get past the clickbait. Oddly enough, though, if you read through what he actually said, the takeaway is basically a tacit admission: he's trying to reset expectations for AI without directly admitting that the strategy of massively investing in LLMs is going bust and delivering no measurable value, so he can deflect with "BUT HEY, CHECK OUT QUANTUM".

[–] [email protected] 28 points 3 weeks ago

YES

YES

FUCKING YES! THIS IS A WIN!

Hopefully they curtail their investments and stop wasting so much fucking power.

[–] [email protected] 25 points 3 weeks ago (11 children)

That's because they want to use AI in a server scenario where clients log in. That, translated to American English and spoken with honesty, means that they are spying on you. Anything you do on your computer is subject to automatic spying. Like, you could be totally under the radar, but as soon as you say the magic words together, bam!... I'd love a sling thong for my wife... bam! Here's 20 ads, just click to purchase, since they already stole your wife's boob size and body measurements and preferred lingerie styles. And if you're on McMaster... hmm, I need a 1/2 pipe and a cap... better get two caps in case you cross-thread one... ding dong! FBI! We know you're in there! Come out with your hands up!

[–] [email protected] 19 points 3 weeks ago

He probably saw that SoftBank and Masayoshi Son were heavily investing in it and figured it was dead.

[–] [email protected] 13 points 3 weeks ago (5 children)

And crashing the markets in the process... At the same time, they came out with a bunch of mumbo jumbo and sci-fi babble about having a million-qubit quantum chip... 😂

[–] [email protected] 13 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Is he saying it's just LLMs that are generating no value?

I wish reporters could be more specific with their terminology. They just add to the confusion.

Edit: he's talking about generative AI, of which LLMs are a subset.

[–] [email protected] 13 points 3 weeks ago (2 children)

R&D is always a money sink

[–] [email protected] 22 points 3 weeks ago

Especially when the product is garbage lmao

[–] [email protected] 22 points 3 weeks ago (1 children)

It isn't R&D anymore if you're actively marketing it.

[–] [email protected] 12 points 3 weeks ago (5 children)

Uh... it used to be, and it should be. But the entire industry has embraced treating production as test now. We sell alpha-release games as mainstream releases. Microsoft fired QC long ago. They push out world-breaking updates every other month.

And people have forked over their money with smiles.

[–] [email protected] 12 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

microsoft rn:

✋ AI

👉 quantum

can't wait to have to explain the difference between asymmetric-key and symmetric-key cryptography to my friends!
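
(If anyone wants a head start on that explainer, here's a minimal sketch using the pyca/cryptography package: symmetric crypto uses one shared key for both directions, asymmetric uses a public/private pair, and it's the latter that a large quantum computer would threaten via Shor's algorithm.)

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Symmetric: the same shared secret both encrypts and decrypts.
box = Fernet(Fernet.generate_key())
assert box.decrypt(box.encrypt(b"hi")) == b"hi"

# Asymmetric: anyone can encrypt with the public key,
# but only the private-key holder can decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
ciphertext = private_key.public_key().encrypt(b"hi", oaep)
assert private_key.decrypt(ciphertext, oaep) == b"hi"
```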

[–] [email protected] 10 points 3 weeks ago (6 children)

That’s standard for emerging technologies. They tend to be loss leaders for quite a long period in the early years.

It’s really weird that so many people gravitate to anything even remotely critical of AI, regardless of context or even accuracy. I don’t really understand the aggressive need for so many people to see it fail.

[–] [email protected] 29 points 3 weeks ago

For me personally, it's because it's been so aggressively shoved in my face in every context. I never asked for it, and I can't escape it. It actively gets in my way at work (github copilot) and has already re-enabled itself at least once. I'd be much happier to just let it exist if it would do the same for me.

[–] [email protected] 10 points 3 weeks ago* (last edited 3 weeks ago)

Because there have already been multiple AI bubbles (e.g., ELIZA; I had a lot of conversations with FREUD running on an Apple IIe). It's also been falsely presented as basically "AGI."

AI models trained to help doctors recognize cancer cells - great, awesome.

AI models used as the default research tool for every subject: very, very, very bad. It's also so forced, and because it's forced, I routinely see that it has generated absolute, misleading horseshit in response to my research queries. But your average Joe will take that on faith, and your high schooler will grow up thinking that Columbus discovered Colombia or something.
