
I found the article in a post on the fediverse, and I can't find it anymore.

The researchers asked an LLM a simple mathematical question (like 7+4) and could then see how it worked internally: it arrived at the answer by finding similar paths, nothing like performing mathematical reasoning, even though the final answer was correct.

Then they asked the LLM to explain how it found the result, i.e. what its internal reasoning was. The answer was detailed step-by-step mathematical logic, like a human explaining how to perform an addition.

This showed 2 things:

  • LLMs don't "know" how they work

  • the second answer was a rephrasing of text from its training data that explains how math works, so the LLM just used that as an explanation

I think it was a very interesting and meaningful analysis.

Can anyone help me find this?

EDIT: thanks to @theunknownmuncher @lemmy.world, it's this one: https://www.anthropic.com/research/tracing-thoughts-language-model

EDIT2: I'm aware LLMs don't "know" anything and don't reason, and that's exactly why I wanted to find the article. Some more details here: https://feddit.it/post/18191686/13815095

[–] [email protected] 80 points 1 week ago (15 children)

Can’t help myself, so here’s a rant on people asking LLMs to “explain their reasoning”, which is impossible because they can’t reason (not meant to be attacking OP, just attacking the “LLMs think and reason” people and the companies that spout it):

LLMs are just matrix math to complete the most likely next word. They don’t know anything and can’t reason.
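
To make “matrix math to complete the most likely next word” concrete, here is a toy Python sketch with made-up numbers; it is nothing like a real model’s scale or training, just the shape of the operation:

```python
# Toy sketch: the "most likely next word" step is a matrix multiplication
# followed by a softmax over the vocabulary. Everything here is made up.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "on", "mat"]        # pretend 5-word vocabulary
hidden = rng.standard_normal(16)                  # context vector from earlier layers
W_out = rng.standard_normal((16, len(vocab)))     # learned output projection

logits = hidden @ W_out                           # one matrix multiplication
probs = np.exp(logits - logits.max())
probs /= probs.sum()                              # softmax: a probability per word

print(vocab[int(np.argmax(probs))])               # "complete the most likely next word"
```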

Anything you read or hear about LLMs or “AI” getting “asked questions” or “explain its reasoning” or talking about how they’re “thinking” is just AI propaganda to make you think they’re doing something LLMs literally can’t do but people sure wish they could.

In this case it sounds like people who don’t understand how LLMs work eating that propaganda up and approaching LLMs like there’s something to talk to or discern from.

If you waste egregiously high amounts of gigawatts to put everything that’s ever been typed into matrices you can operate on, you get a facsimile of the human knowledge that went into typing all of that stuff.

It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.

TLDR; LLMs can never think or reason, anyone talking about them thinking or reasoning is bullshitting, they utilize almost everything that’s ever been typed to give (occasionally) reasonably useful outputs that are the most basic bitch shit because that’s the most likely next word at the cost of environmental disaster

[–] [email protected] 20 points 1 week ago (2 children)

People don't understand what "model" means. That's the unfortunate reality.

[–] [email protected] 14 points 1 week ago (1 children)

They walk down runways and pose for magazines. Do they reason? Sometimes.

[–] [email protected] 7 points 1 week ago

Yeah. That's because people's unfortunate reality is a "model".

[–] [email protected] 18 points 1 week ago* (last edited 1 week ago) (6 children)

It's true that LLMs aren't "aware" of what internal steps they are taking, so asking an LLM how it reasoned out an answer will just produce text that statistically sounds right based on its training set, but to say something like "they can never reason" is provably false.

It's obvious that you have a bias and desperately want reality to confirm it, but there's been significant research and progress in tracing the internals of LLMs, and it shows logic, planning, and reasoning.

EDIT: lol you can downvote me but it doesn't change evidence-based research

It’d be impressive if the environmental toll making the matrices and using them wasn’t critically bad.

Developing a AAA video game has a higher carbon footprint than training an LLM, and running inference uses significantly less power than playing that same video game.

[–] [email protected] 14 points 1 week ago (3 children)

Too deep on the AI propaganda there; it’s completing the next word. You can give the base LLM umpteen layers to make complicated connections, it still ain’t thinking.

The LLM corpos trying to get nuclear plants to power their gigantic data centers, while AAA devs aren’t trying to buy nuclear plants, says that’s a straw man and that you’re also simply wrong.

Using a pre-trained and memory-crushed LLM that can run on a small device won’t take up too much power. But that’s not what you’re thinking of. You’re thinking of the LLM only accessible via ChatGPT’s API, with a yuge context length and massive matrices that need hilariously large amounts of RAM and compute power to execute. And it’s still a facsimile of thought.

It’s okay they suck and have very niche actual use cases - maybe it’ll get us to something better. But they ain’t gold, they ain't smart, and they ain’t worth destroying the planet.

[–] [email protected] 6 points 1 week ago (1 children)

but there's been significant research and progress in tracing internals of LLMs, that show logic, planning, and reasoning.

would there be a source for such research?

[–] [email protected] 10 points 1 week ago (1 children)
[–] [email protected] 8 points 1 week ago (1 children)

but this article suggests that llms do the opposite of logic, planning, and reasoning?

quoting:

Claude, on occasion, will give a plausible-sounding argument designed to agree with the user rather than to follow logical steps. We show this by asking it for help on a hard math problem while giving it an incorrect hint. We are able to “catch it in the act” as it makes up its fake reasoning,

are there any sources which show that llms use logic, conduct planning, and reason (as was asserted in the 2nd level comment)?

[–] [email protected] 10 points 1 week ago

No, you're misunderstanding the findings. It does show that LLMs do not explain their reasoning when asked, which makes sense and is expected. They do not have access to their inner workings and generate a response that "sounds" right, but tracing their internal logic shows they operate differently from what they claim when asked. You can't ask an LLM to explain its own reasoning. But the article shows how they've made progress with tracing under the hood, and the surprising results they found about how it is able to do things like plan ahead, which defeats the misconception that it is just "autocomplete".

[–] [email protected] 13 points 1 week ago (1 children)

How would you prove that someone or something is capable of reasoning or thinking?

[–] [email protected] 10 points 1 week ago (2 children)

You can prove it’s not by doing some matrix multiplication and seeing that it’s matrix multiplication. Much easier way to go about it.

[–] [email protected] 20 points 1 week ago* (last edited 1 week ago) (9 children)

Yes, neural networks can be implemented with matrix operations. What does that have to do with proving or disproving the ability to reason? You didn't post a relevant or complete thought

Your comment is like saying an audio file isn't really music because it's just a series of numbers.

[–] [email protected] 7 points 1 week ago (2 children)

People that can't do matrix multiplication don't possess the basic concepts of intelligence now? Or is software that can do matrix multiplication intelligent?

[–] [email protected] 2 points 6 days ago (1 children)

So close: LLMs work via matrix multiplication, which is well understood by many meat bags, and matrix math can’t think. If a meat bag can’t do matrix math, that’s OK, because the meat bag doesn’t work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something

[–] [email protected] 1 points 6 days ago

Well, on the other hand, meat bags can't really do neuron stuff either, even though that's essential for any meat bag operation. Humans are still here though, and so are dogs.

[–] [email protected] 11 points 1 week ago (1 children)

I've read that article. They used something they called an "MRI for AIs" and checked, for example, how an AI handled math questions, then asked the AI how it came to that answer, and the pathways actually differed. While the AI talked about using a textbook method, it actually took a different approach. That's what I remember of that article.

But yes, it exists, and it is science, not TikTok.

[–] [email protected] 6 points 1 week ago (1 children)

The environmental toll doesn’t have to be that bad. You can get decent results from a single high-end gaming GPU.

[–] [email protected] 1 points 6 days ago

You can, but the stuff that’s really useful (very competent code completion) needs gigantic context lengths that even rich peeps with $2k GPUs can’t handle. And that’s ignoring the training power and hardware costs to get the models.
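
As a rough, hedged back-of-envelope of why those context lengths blow past a consumer GPU (the model shape here is hypothetical, assuming plain fp16 multi-head attention with no grouped-query or quantization tricks):

```python
# 2x because both keys and values are cached for every layer, head and token.
def kv_cache_gib(layers, heads, head_dim, context_len, bytes_per_value=2):
    return 2 * layers * heads * head_dim * context_len * bytes_per_value / 2**30

# Hypothetical 70B-class shape: 80 layers, 64 heads of dimension 128, fp16.
print(f"{kv_cache_gib(80, 64, 128, 128_000):.0f} GiB of KV cache at 128k context")
# ~312 GiB, and that's before counting the model weights themselves.
```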

Techbros chasing VC funding are pushing LLMs to the physical limit of what humanity can provide power and hardware-wise. Way less hype and letting them come to market organically in 5/10 years would give the LLMs a lot more power efficiency at the current context and depth limits. But that ain’t this timeline, we just got VC money looking to buy nuclear plants and fascists trying to subdue the US for the techbro oligarchs womp womp

[–] [email protected] 6 points 1 week ago

It's a developer option that isn't generally available on consumer-facing products. It's literally just a debug log that outputs the steps to arrive at a response, nothing more.

It's not about novel ideation or reasoning (programmatic neural networks don't do that), but just an output of statistical data that says "Step 1 was 90% certain, Step 2 was 89% certain... etc."

[–] [email protected] 79 points 1 week ago (2 children)
[–] [email protected] 21 points 1 week ago

Oh wow thank you! That's it!

I didn't even remember how good this article was and how many experiments it collected.

[–] [email protected] 6 points 1 week ago

Here's a book for a different audience. It explains in layman's terms why to be wary of this tech: https://thebullshitmachines.com/

[–] [email protected] 50 points 1 week ago (1 children)

I don't know how I work. I couldn't tell you much about neuroscience beyond "neurons are linked together and somehow that creates thoughts". And even when it comes to complex thoughts, I sometimes can't explain why. At my job, I often lean on intuition I've developed over a decade. I can look at a system and get an immediate sense if it's going to work well, but actually explaining why or why not takes a lot more time and energy. Am I an LLM?

[–] [email protected] 20 points 1 week ago (13 children)

I agree. This is the exact problem I think people need to face with neural network AIs. They work the exact same way we do. Even if we analysed the human brain, it would look like wires connected to wires with different resistances all over the place, with some other chemical influences.

I think everyone forgets that neural networks were used in AI to replicate how animal brains work, and clearly, if it worked for us to get smart, then it should work for something synthetic. Well, we've certainly answered that now.

Everyone being like "oh it's just a predictive model and it's all math and math can't be intelligent" is questioning exactly how their own brains work. We are just prediction machines; the brain releases dopamine when it correctly predicts things, and it self-learns from correctly assuming how things work. We modelled AI off of ourselves. And if we don't understand how we work, of course we're not going to understand how it works.

[–] [email protected] 21 points 1 week ago (2 children)

They work the exact same way we do.

Two things being difficult to understand does not mean that they are the exact same.

[–] [email protected] 20 points 1 week ago (1 children)

LLMs, among other things, lack the whole "live" neurotransmitter regulation aspect and the plasticity of the brain.

We are nowhere near a close representation of actual brains. LLMs are to brains what a horse carriage is to a modern car. Yes, both have four wheels and move, but that is far from making them the same thing.

[–] [email protected] 6 points 1 week ago

So LLMs are like a human with anterograde amnesia. They're like Dory.

[–] [email protected] 16 points 1 week ago (3 children)

I agree. This is the exact problem I think people need to face with neural network AIs. They work the exact same way we do.

I don't think this is a fair way of summarizing it. You're making it sound like we have AGI, which we do not have AGI and we may never have AGI.

[–] [email protected] 15 points 1 week ago (1 children)

Even if LLM "neurons" and their interconnections are modeled on biological ones, LLMs aren't modeled on the human brain, about which a lot is still not understood.

The first thing is that how the neurons are organized is completely different. Think about the cortex and the transformer.

Second is the learning process. Nowhere close.

The point explained in the article, that we do math through logical steps while LLMs go by resemblance, is a small but meaningful example. And it also shows that you can see how LLMs work; it's just very difficult.
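
To make the "organized completely differently" point concrete, here is a minimal toy sketch of the self-attention step that a transformer repeats over and over; it only illustrates the transformer side of the comparison and does not pretend to model the cortex:

```python
# Toy single-head self-attention (tiny sizes; real models stack dozens of
# these layers with many heads and far larger dimensions).
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # x: (seq_len, d_model); every token attends to every other token
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ v                               # weighted mix of token values

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))                      # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)           # (4, 8)
```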

[–] [email protected] 22 points 1 week ago (1 children)

By design, they don't know how they work. It's interesting to see this experimentally proven, but it was already known. In the same way the predictive text function on your phone keyboard doesn't know how it works.

[–] [email protected] 17 points 1 week ago (1 children)

I'm aware of this and agree but:

  • I see that asking an LLM how it got to its answer, as "proof" of sound reasoning, has become common

  • this new trend of "reasoning" models, where an internal conversation is shown in all its steps, seems to be based on this assumption of a trustworthy train of thought. And given the simple experiment I mentioned, that is extremely dangerous and misleading

  • take a look at this video: https://youtube.com/watch?v=Xx4Tpsk_fnM : everything is based on observing and directing this internal reasoning, and these guys are computer scientists. How can they trust this?

So having a well-written article at hand is a good idea imho.

[–] [email protected] 13 points 1 week ago

It's the Anthropic article you're looking for, where they did the equivalent of open-brain surgery and found that LLMs do maths through very strange and eerily humanlike operations: they estimate, then separately work out the last digit, a bit like I do. It sucks as a counting technique though.
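
As a toy caricature of that strategy in Python (purely illustrative, not how the model actually implements it; the fuzzy path can even land a ten off when both rounding errors stack up):

```python
def add_like_the_article(a: int, b: int) -> int:
    # Path 1: fuzzy magnitude estimate, each operand rounded to the nearest ten.
    rough = round(a / 10) * 10 + round(b / 10) * 10
    # Path 2: the exact last digit, worked out separately from the units digits.
    last = (a % 10 + b % 10) % 10
    # Combine: the number ending in `last` that sits closest to the rough estimate.
    return min((c for c in range(rough - 14, rough + 15) if c % 10 == last),
               key=lambda c: abs(c - rough))

print(add_like_the_article(7, 4))  # 11: right answer, weird route
```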

[–] [email protected] 11 points 1 week ago* (last edited 1 week ago)

Define "know".

  • An LLM can have text describing how it works, be trained on that text, and respond with an answer that incorporates it.

  • LLMs have no intrinsic ability to "sense" what's going on inside them, nor even a sense of time. It's just not an input to their state. You can build neural-net-based systems that do have such an input, but ChatGPT or whatever isn't that.

  • LLMs lack a lot of the mechanisms that I would call essential to be able to solve problems in a generalized way. While I think Dijkstra had a valid point:

    The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

    ...and we shouldn't let our prejudices about how a mind "should" function internally cloud how we treat artificial intelligence... it's also true that we can look at an LLM and say that it just fundamentally doesn't have the ability to do a lot of things that a human-like mind can. An LLM is, at best, something like a small part of our mind. While extracting it and playing with it in isolation can produce some interesting results, there's a lot that it can't do on its own: it won't, say, engage in goal-oriented behavior. Asking a chatbot questions that require introspection and insight on its part won't yield interesting results, because it can't really engage in introspection or insight to any meaningful degree. It has very little mutable state, unlike your mind.
