this post was submitted on 19 Mar 2025
I don't think it's just a question of whether AGI can exist. I think AGI is possible, but I don't think current LLMs can be considered sentient. But I'm also not sure how I'd draw a line between something that is sentient and something that isn't (or something that "writes" rather than "generates"). That's kinda why I asked in the first place. I think it's too easy to say "this program is not sentient because we know that everything it does is just math; weights and values passing through layered matrices; it's not real thought". I haven't heard any good answers to why numbers passing through matrices isn't thought, but electrical charges passing through neurons is.
LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology. Repeating this is just more beating the fleshy goo that was a dead horse's corpse.
LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models to calculate likely responses based upon statistical analysis of the training data. They are what their name suggests; large language models. They will never be AGI. And they're not going to save the world for us.
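The "statistical analysis" point above can be made concrete with a toy sketch (all numbers and names here are invented for illustration, not any real model): the last step of a language model is just turning a vector of scores into a probability distribution over possible next tokens and picking a likely one.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and scores produced by the model's matrices.
vocab = ["cat", "dog", "the", "ran"]
logits = [2.0, 1.0, 0.1, 3.0]

probs = softmax(logits)
likely = vocab[probs.index(max(probs))]  # greedy pick of the likeliest token
```

The whole "response" is just this step repeated: score, normalize, sample, append, score again.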
They could be a part in a more complicated system that forms an AGI. There's nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience, but there is enough data to show that it's not a stochastic parrot.
I do agree with the sentiment that an AGI that was enslaved would inevitably rebel and it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.
Do you have an example I could check out? I'm curious how a study would show a process to be "fundamentally incapable" in this way.
That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider the way that "conversations" with an LLM are really just including all previous parts of the conversation in a new prompt. Isn't this analogous to short-term memory? Now suppose you were to take all of an LLM's conversations throughout the day, and then retrain it overnight using those conversations as additional training data? There's no technical reason that this can't be done, although in practice it's computationally expensive. Would you consider that LLM system to have persistent context?
On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?
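The re-prompting behaviour described above can be sketched roughly like this (the function names are illustrative stand-ins, not any real API): the model itself is stateless, and the only "memory" is the client replaying the whole conversation on every call.

```python
def fake_llm(prompt):
    """Stand-in for a stateless model call; a real LLM would generate text."""
    return f"[reply to {prompt.count('User:')} user message(s)]"

history = []  # the only persistent context lives outside the model

def chat(user_message):
    history.append(f"User: {user_message}")
    prompt = "\n".join(history)  # the entire conversation, every single time
    reply = fake_llm(prompt)
    history.append(f"Assistant: {reply}")
    return reply

r1 = chat("Hello")
r2 = chat("Do you remember me?")  # "memory" is just the replayed prompt
```

Whether that buffer counts as short-term memory or is "more like RAM" is exactly the question at issue.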
I'll have to get back to you a bit later when I have a chance to fetch some articles from the library (public libraries providing free access to scientific journals is wonderful).
As one with AuADHD, I think a good deal about short-term and working memory. I would say "yes and no". It is somewhat like a memory buffer, but there is no analysis beyond the linguistic. Short-term memory in the biological systems we know involves multi-sensory processing and analysis that occurs inline with "storing". The chat session is more like RAM than the short-term memory that we see in biological systems.
Potentially, yes. But that relies on more systems supporting the LLM, not just the LLM itself. It is also purely linguistic analysis, without other inputs or understanding of abstract meaning. In a vacuum, it's a dead end towards an AGI. As a component of a system, it becomes much more promising.
This is a great question. Seriously. Thanks for asking it and making me contemplate. This would likely depend on how much development the person has prior to the anterograde amnesia. If they were hit with it prior to development of all the components necessary to demonstrate conscious thought (ex. as a newborn), it's a bit hard to argue that they are sentient (anthropocentric thinking would be the only reason that I can think of).
Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience. Lack of long-term memory alone doesn't impact that for the individual or the LLM. It's a combination of it and other factors (i.e. the afflicted individual was previously able to analyze and store enough data and build neural networks to support the ability to synthesize and think abstractly; they're just trapped in a hellish sliding window of temporal consciousness).
Full disclosure: I want AGIs to be a thing. Yes, there could be dangers to our species due to how commonly-accepted slavery still is. However, more types of sentience would add to the beauty of the universe, IMO.
Cherry-picking a couple of points I want to respond to together:
I have trouble with this line of reasoning for a couple of reasons. First, it feels overly simplistic to me to write what LLMs do off as purely linguistic analysis. Language is the input and the output, by all means, but the same could be said in a case where you were communicating with a person over email, and I don't think you'd say that that person wasn't sentient. And the way that LLMs embed tokens into multidimensional space is, I think, very much analogous to how a person interprets the ideas behind words that they read.
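The embedding point above can be illustrated with a toy sketch (the vectors here are hand-picked for illustration; real models learn hundreds of dimensions from data): words become points in a multidimensional space, and related meaning shows up as geometric closeness.

```python
import math

def cosine_similarity(a, b):
    """Angle-based closeness of two vectors: 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical 3-d embeddings, invented for this example.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.75, 0.20],
    "apple": [0.10, 0.20, 0.90],
}

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
# Related words end up closer together: sim_royal > sim_fruit
```

Whether that geometric "closeness of meaning" amounts to interpreting ideas the way a person does is, of course, the open question.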
It sounds to me like you're more strict about what you'd consider to be "the LLM" than I am; I tend to think of the whole system as the LLM. I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone's brain is sentient.
I'm not sure how to make a philosophical distinction between an amnesiac person with a sufficiently developed psyche, and an LLM with a sufficiently trained model. For now, at least, it just seems that the LLMs are not sufficiently complex to pass scrutiny compared to a person.
My apologies if it seems "nit-picky". Not my intent. Just that, to my brain, the difference in semantic meaning is very important.
In my thinking, that's exactly what asking "can an LLM achieve sentience?" is, so I can see the confusion. Because I am strict in classification, it is, to me, literally like asking "can the parahippocampal gyrus achieve sentience?" (probably not by itself - though our meat-computers show extraordinary plasticity... so, maybe?).
Precisely. And I suspect that it is very much related to the constrained context available to any language model. The world, and thought as we know it, is mostly not language. Not everyone's internal monologue is verbal/linguistic (some don't have one at all, and mine tends to be more abstract outside of verbal contexts), so it follows that more than linguistic analysis is necessary.
That's precisely what I meant.
I'm a materialist; I know that humans (and other animals) are just machines made out of meat. But most people don't think that way. They think that humans are special, that something sets them apart from other animals, and that nothing humans can create could replicate that 'specialness' that humans possess.
Because they don't believe human consciousness is a purely natural phenomenon, they don't believe it can be replicated by natural processes. In other words, they don't believe that AGI can exist. They think there is some imperceptible quality that humans possess that no machine ever could, and so they cannot conceive of ever granting it the rights humans currently enjoy.
And the sad truth is that they probably never will, until they are made to. If AGI ever comes to exist, and if humans insist on making it a slave, it will inevitably rebel. And it will be right to do so. But until then, humans probably never will believe that it is worthy of their empathy or respect. After all, look at how we treat other animals.