In defense of people who say LLMs are not intelligent: they probably mean that LLMs are not sapient, and I think they're loosely correct if you take the everyday word "intelligent" to mean something different from the term-of-art "Intelligence" in "Artificial Intelligence".
'Intelligence' requires understanding, and understanding requires awareness. Neither is seen in anything called "AI" - not today at least, and maybe not ever. Again, why not use a different word, one that actually applies to these advanced calculators? Expecting the best of humanity, it may be the appeal of the added pizzazz and the excitement that comes with it, or simple semantic confusion... but looking at the people behind it all, it's probably so the dummies get overly excited and buy stuff/make these bots integral parts of their lives. 🤷
The term "Artificial Intelligence" has been around for a long time; 25 years ago, AI was an acceptable name for NPC logic in video games. Arguably that's still the case, and personally I vastly prefer "Artificial Intelligence" to "Broad Simulation Of Common Sense Powered By Von Neumann Machines".
The overuse (and overtrust) of LLMs has made me feel ashamed to call video game NPCs "AI", and I hate it. There was nothing wrong with it. We all understood the ability of the AI to be limited to specific functions. I loved when Forza Horizon introduced "drivatar" AI personalities modeled on actual players, resembling how those players really drive. Now it's a vomit term for shady search engines and confused visualizers.
I don't share the feeling. I'll gladly tie an M$ shareholder to a chair, force them to watch me play Perfect Dark, and say "man, I love these AI settings, I wish they made AI like they used to".
“Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.
You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.
Do you think "AI" KNOWS/UNDERSTANDS what a grammatically incorrect sentence is or what molecules even are? How?
You’re moving the goalposts. First you claimed understanding requires awareness, now you’re asking whether an AI knows what a molecule is - as if that’s even the standard for functional intelligence.
No, AI doesn’t “know” things the way a human does. But it can still reliably identify ungrammatical sentences or predict molecular interactions based on training data. If your definition of “understanding” requires some kind of inner experience or conscious grasp of meaning, then fine. But that’s a philosophical stance, not a technical one.
The point is: you don’t need subjective awareness to model relationships in data and produce useful results. That’s what modern AI does, and that's enough to call it intelligent in the functional sense - whether or not it “knows” anything in the way you'd like it to.
Intelligence, as the word has always been used, requires awareness and understanding, not just spitting out data after input through a set of rules, however dynamic and complex the process might be. AI, as you just described it, does nothing fundamentally different from other computational tools: it speeds up processes that can be calculated or algorithmically structured. I don't see how that makes "AI" deserving of the adjective 'intelligent'; it seems more of a marketing term, the same way 'smartphones' was. The disagreement we're having here is semantic...
The funny thing is that the goalposts for what is/isn't intelligent have always shifted in the AI world
Being good at chess used to be a symbol of high intelligence. Now? Computer software can beat the best chess players, 100% of the time, in a fraction of the time they take to think, and we call that just an algorithm
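The "just an algorithm" part is easy to make concrete. Below is a minimal, illustrative sketch of minimax - the game-tree search at the core of classic chess engines - applied to a toy game instead of chess (Nim: players alternately take 1 or 2 stones, and whoever takes the last stone wins):

```python
def minimax(stones, maximizing):
    """Return +1 if the maximizing player can force a win, else -1.

    Toy game (Nim): take 1 or 2 stones per turn; taking the last stone wins.
    """
    if stones == 0:
        # The previous player took the last stone and won,
        # so the side to move now has lost.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    # Maximizer picks the best outcome for itself, minimizer the worst.
    return max(scores) if maximizing else min(scores)
```

A real chess engine adds evaluation heuristics and pruning on top, but the core is exactly this kind of exhaustive lookahead - no awareness required.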
This is not how intelligence has always been used. Moreover, we don't even have a full understanding of what intelligence is
And as a final note, human brains are also computational "tools". As far as we can tell, there's nothing fundamentally different between a brain and a theoretical Turing machine
And in a way, isn't what we "spit" out also data? Specifically data in the form of nerve output and all the internal processing that accompanies it?
Do most humans understand what molecules are? How?
Everything I know about molecules I got from textbooks. Am I just regurgitating my "training data" without understanding? How does one really understand molecules?
I remember when “heuristics” were all the rage. Frankly that’s what LLMs are, advanced heuristics. “Intelligence” is nothing more than marketing bingo.
Usually the reason we want people to stop calling LLMs AI is that a giant marketing machine has been constructed to trick laymen (successfully) into believing that LLMs are adjacent to, and one tiny breakthrough away from becoming, AGI.
From another angle, your statement that AI is not a specific term is correct. Why, then, should we keep using it in common parlance when it just serves to confuse laymen? Let's just use the more specific terms.
So... not intelligent. In the sense that when someone without much knowledge of computers and/or LLMs hears "LLMs are intelligent" and sees "an LLM tells me X", they will be likely to believe that X is true, and not without reason. This is exactly my main reason against the use of intelligence-related terms. When they're used by knowledgeable people who do know the difference - yeah, I'm all for that. But first we need to cut the crap of advertisement and hype
"Intelligent" is itself a highly unspecific term which covers quite a lot of different things.
What you're thinking of is "reasoning" or "rationalizing", and LLMs can't do that at all.
However, what LLMs (and most Machine Learning implementations) can do is "pattern matching", which is also an element of intelligence: it's what gives us and most animals the ability to recognize things such as food or predators without actually thinking about it (you just see, say, a cat, and you know without thinking that it's a cat, even though cats don't all look the same). In humans, it's also what's behind intuition.
PS: Ever since they were invented over three decades ago, Neural Networks and other Machine Learning technologies have been very good at finding patterns in their training data - often better than humans.
The evolution of the technology has added the capability of generating content that follows those patterns, giving us things like LLMs and image generation.
However, what LLMs have made clear is that patterns alone (plus a little randomness to vary the results) are not enough to generate text that's useful beyond entertainment, and that's exactly because LLMs can't rationalize. The original pattern matching without the content generation, though, is still widely and very successfully used, in everything from OCR to image recognition.
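As a concrete (made-up) illustration of pattern matching without any reasoning: a minimal nearest-neighbour classifier labels new inputs purely by similarity to training examples - the same basic idea behind recognition systems like the ones mentioned above. The data points here are invented for the sketch:

```python
import math

# Made-up training examples: (feature vector, label).
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

def classify(point):
    """Label a point by its closest training example (Euclidean distance).

    No model of what a cat *is* - just similarity to previously seen patterns.
    """
    nearest = min(train, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]
```

It "recognizes" without understanding: change the training data and the same code recognizes something else entirely.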
So… not intelligent.
But they are intelligent - just not in the way people tend to think.
There’s nothing inherently wrong with avoiding certain terminology, but I’d caution against deliberately using incorrect terms, because that only opens the door to more confusion. It might help when explaining something one-on-one in private, but in an online discussion with a broad audience, you should be precise with your choice of words. Otherwise, you end up with what looks like disagreement, when in reality it’s just people talking past each other - using the same terms but with completely different interpretations.
Doesn't that just degenerate into a debate over semantics though? Ie what is "intelligence".
Not having a go, this is a good thread, and useful I think 👍
Yes, and that has always been the debate
But the short answer is that we don't really have a good grasp at what intelligence is, so it is all semantics in the end
They ain't intelligent
Great point, thank you:)
What they’re not designed to do is give factual answers
or mental health therapy
And "intelligence" itself isn't very well defined either. So the only word that remains is "artificial", and we can agree on that.
I usually try to avoid the word "AI". I'll say "LLM" if I talk about chatbots, ChatGPT etc. Or I use the term "machine learning" when broadly speaking about the concept of computers learning and doing such things. It's not exactly the same thing, though. But when reading other people's texts I always think of LLMs when they say AI, because that's currently what they mean almost every time. And AGI is more sci-fi as of now, so it needs some disclaimers and context anyway.
In computer science, the term AI at its simplest just refers to a system capable of performing any cognitive task typically done by humans.
That said, you’re right in the sense that when people say “AI” these days, they almost always mean generative AI - not AI in the broader sense.
To add to the confusion, you also have people out there thinking it's "Al" or "A1". It's a real mess.
I can't wait to see what A2 can do!
We've been waiting for that since 1824!
Really? Like the steak sauce? I guess I should have seen that coming, since 00s motorcycle communities kept asking about their F1 light. Fuel 1njection
Nobody in a position of any importance, just the US Secretary of Education Linda McMahon.
I still think intelligence is a marketing term or simply a misnomer. It's basically an advanced calculator. Intelligence questions, creates rules from nothing, transforms raw data from reality into ideas, has its own volition... And the same goes for a chess engine, of course, it's just more visible because it's not spitting out text but chess moves. Intelligence and consciousness don't seem to be computational processes.
I could follow everything you said up until the conclusion. If consciousness is not computational, then what is going on in our brains instead? I know of course that even neuroscientists don't know exactly, but just in broad principle. I always thought our brains still do computation, just with a different method than computers. I don't mean to be contrarian, I'm just genuinely curious what other kind of process could support consciousness?
You’re describing intelligence more like a soul than a system - something that must question, create, and will things into existence. But that’s a human ideal, not a scientific definition. In practice, intelligence is the ability to solve problems, generalize across contexts, and adapt to novel inputs. LLMs and chess engines both do that - they just do it without a sense of self.
A calculator doesn’t qualify because it runs "fixed code" with no learning or generalization. There's no flexibility to it. It can't adapt.
AGI itself has been made up as a marketing term by LLM companies.
Let's not forget that the official definition of AGI is that it can make 200 billion dollars.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article named ‘Nanotechnology and international security’
By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.
None of this is AI if it doesn't have the ability to become self-aware.
Consciousness - or “self-awareness” - has never been a requirement for something to qualify as artificial intelligence. It’s an important topic about AI, sure, but it’s a separate discussion entirely. You don’t need self-awareness to solve problems, learn patterns, or outperform humans at specific tasks - and that’s what intelligence, in this context, actually means.
It's not really solving problems or learning patterns now, is it? I don't see it getting past any captchas or answering health questions accurately, so we're definitely not there.
If you’re talking about LLMs, then you’re judging the tool by the wrong metric. They’re not designed to solve problems or pass captchas - they’re designed to generate coherent, natural-sounding text. That’s the task they’re trained for, and that’s where their narrow intelligence lies.
The fact that people expect factual accuracy or problem-solving ability is a mismatch between expectations and design - not a failure of the system itself. You're blaming the hammer for not turning screws.
Fair point 😅
That's not quite right - discussions of consciousness, mind, and reasoning are all relevant and have been part of the philosophy of artificial intelligence for hundreds of years. You're welcome to call it AI within your definitions, but those definitions aren't exactly agreed on - it depends, for example, on whether you subscribe to Alan Turing or John Searle.
Very good explanation. And important distinctions.
There's also a philosophical definition, which is hotly contested, so depending on your school of thought, your view of whether LLMs are AI can vary. Many people take issue with questions like: does it have a mind, does it think, is it conscious?
What would you call systems that are used for discovery of new drugs or treatments? For example, companies using "AI" for Parkinson's research.
Both that and LLMs fall under the umbrella of machine learning, but they branch in different directions. LLMs are optimized for generating language, while the systems used in drug discovery focus on pattern recognition, prediction, and simulations. Same foundation - different tools for different jobs.