this post was submitted on 19 Mar 2025

Technology


While I am glad this ruling went this way, why'd she have to diss Data to make it?

To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called "Schisms." StarTrek.com posted the full poem, but here's a taste:

"Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.

I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."

Data "might be worse than ChatGPT at writing poetry," but his "intelligence is comparable to that of a human being," Millett wrote. If AI ever reached Data's level of intelligence, Millett suggested, copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now.

[–] [email protected] 15 points 4 days ago* (last edited 4 days ago) (21 children)

Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.

AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.

LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent. They can’t think. They just predict which word is most likely to come next and string things together that way.

If you ask ChatGPT a question, it analyzes your words and responds with the series of words it has calculated to have the highest probability of being the correct response.

The reason they seem so intelligent is that they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas are accurate enough to let them mimic human speech in a convincing way.
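To make the "it just predicts the next word" idea concrete, here is a minimal toy sketch using a bigram (word-pair) frequency model. This is not how real LLMs work internally (they use neural networks over tokens, not raw word counts), but it illustrates the same principle at a tiny scale: learn which words tend to follow which, then generate by repeatedly picking the most probable next word.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (purely illustrative).
corpus = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the mouse ran under the mat ."
).split()

# Count how often each word follows each other word (a bigram model).
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def next_word(word):
    """Return the most probable word to follow `word` in the corpus."""
    return transitions[word].most_common(1)[0][0]

# Generate a short "sentence" by always taking the likeliest next word.
word = "the"
output = [word]
for _ in range(4):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # a plausible-looking phrase assembled purely from counts
```

Real models sample from the probability distribution rather than always taking the top word, which is why their output varies between runs, but the core mechanism, probability over next tokens, is the same.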

And when they start hallucinating, it’s because they don’t understand how they sound, and so far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM using a second LLM.
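The "second LLM checks the first" mitigation mentioned above can be sketched as a simple generate-then-verify loop. Everything here is a hypothetical stand-in: `generate` and `verify` are stubs that, in practice, would each be a call to a real model API, and the retry logic is one plausible shape for the pattern, not a specific product's implementation.

```python
def generate(prompt: str) -> str:
    # Stub standing in for the primary model's answer.
    return "The Eiffel Tower is in Berlin."

def verify(claim: str) -> bool:
    # Stub standing in for a second model asked whether the claim holds up.
    return "Paris" in claim

def answer_with_check(prompt: str, retries: int = 2) -> str:
    """Return a draft answer only if the checker model accepts it."""
    for _ in range(retries):
        draft = generate(prompt)
        if verify(draft):
            return draft
    return "Unable to produce a verified answer."

print(answer_with_check("Where is the Eiffel Tower?"))
```

Since the stubbed checker rejects the stubbed answer, this sketch falls through to the fallback string, which mirrors the real limitation: a checker can catch some hallucinations, but it cannot force the generator to produce a correct answer.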

[–] [email protected] 9 points 4 days ago (19 children)

So, I will grant that right now humans are better writers than LLMs. And fundamentally, I don't think the way that LLMs work right now is capable of mimicking actual human writing, especially as the complexity of the topic increases. But I have trouble with some of these kinds of distinctions.

So, not to be pedantic, but:

AI can’t create something all on its own from scratch like a human. It can only mimic the data it has been trained on.

Couldn't you say the same thing about a person? A person couldn't write something without having learned to read first, and without having read things similar to what they want to write.

LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent.

This is kind of the classic Chinese room philosophical question, though, right? Can you prove to someone that you are intelligent, and that you think? As LLMs improve and become better at sounding like a real, thinking person, does there come a point at which we'd say that the LLM is actually thinking? And if you say no, the LLM is just an algorithm, generating probabilities based on training data or whatever techniques might be used in the future, how can you show that your own thoughts aren't just some algorithm, formed out of neurons that have been trained on data passed to them over the course of your lifetime?

And when they start hallucinating, it’s because they don’t understand how they sound...

People do this too, though... It's just that LLMs do it more frequently right now.

I guess I'm a bit wary about drawing a line in the sand between what humans do and what LLMs do. As I see it, the difference is how good the results are.

[–] [email protected] 4 points 4 days ago (7 children)

I would do more research on how they work. You’ll be a lot more comfortable making those distinctions then.

[–] [email protected] 6 points 4 days ago (1 children)

I'm a software developer, and have worked plenty with LLMs. If you don't want to address the content of my post, then fine. But "go research" is a pretty useless answer. An LLM could do better!

[–] [email protected] 2 points 4 days ago (1 children)

Then you should have an easier time than most learning more. Your points show a lack of understanding about the tech, and I don’t have the time to pick everything you said apart to try to convince you that LLMs do not have sentience.

[–] [email protected] 3 points 4 days ago (1 children)

"You're wrong, but I'm just too busy to say why!"

Still useless.

[–] [email protected] 2 points 4 days ago (1 children)

It might surprise you to know that you’re not entitled to a free education from me. Your original query of “What’s the difference?” is what I responded to willingly. Your philosophical exploration of the nature of intelligence is not in the same ballpark.

I’ve done vibe coding too, enough to understand that the LLMs don’t think.

https://arstechnica.com/science/2023/07/a-jargon-free-explanation-of-how-ai-large-language-models-work/

[–] [email protected] 5 points 4 days ago (1 children)

Sure, I'm not entitled to anything. And I appreciate your original reply. I'm just saying that your subsequent comments have been useless and condescending. If you didn't have time to discuss further then... you could have just not replied.
