this post was submitted on 25 May 2024
831 points (100.0% liked)

Technology


Google rolled out AI overviews across the United States this month, exposing its flagship product to the hallucinations of large language models.

[–] [email protected] 17 points 11 months ago (5 children)

They're not hallucinations. People are getting very sloppy with terminology. Google's AI is summarizing the content of web pages that search is returning; if there's weird stuff in there, that shows up in the summary.

[–] [email protected] 19 points 11 months ago* (last edited 11 months ago)

You're right that they aren't hallucinations.
The current issue isn't just summarized web pages; it's that Gemini's model is trained on all of Reddit. And because it only fakes understanding context, it takes shitposts as truth.

The trap is that Reddit is maybe 5% useful content, and the rest is shitposts and porn.

[–] [email protected] 10 points 11 months ago (2 children)

And a lot of that content is probably an AI generated hallucination.

[–] [email protected] 12 points 11 months ago (1 children)

Most of what I've seen in the news so far is due to content based on shitposts from Reddit, which is even funnier imo

[–] [email protected] 7 points 11 months ago

I do dislike when the “actual news” starts bringing up social media reactions. Can you imagine a whole show based on the Twitter burns of this week? … it would probably be very popular. 😭

[–] [email protected] 5 points 11 months ago* (last edited 11 months ago) (1 children)

Absolutely. I wrote about this a while back in an essay:

Prime and Mash / Kuru

Basically likening it to a prion disease like Kuru, which humans get from eating the infected brains of other humans.

[–] [email protected] 3 points 11 months ago

Anyone who puts something in their coffee makes it not coffee, and should try another caffeinated beverage!!

[–] [email protected] 2 points 11 months ago* (last edited 11 months ago) (1 children)

LLMs do sometimes hallucinate even when giving summaries, i.e., they put things in the summaries that were not in the source material. Bing did this often the last time I tried it. In my experience, LLMs seem to do very poorly when their context is large (e.g. when "reading" large or multiple articles). With ChatGPT, its output seems more likely to be factually correct when it just generates "facts" from its model instead of "browsing" and adding articles to its context.

[–] [email protected] 1 points 11 months ago

I asked ChatGPT who I was not too long ago. I have a unique name, and there are many sources on the internet with my name on them (I'm not famous, but I've done a lot of stuff), and it made up a multi-paragraph biography of me that was entirely false.

I would sure as hell call that a hallucination, because there is no question it was trained on my name if it was trained on the internet in general, yet it got it entirely wrong.

Curiously, now it says it doesn't recognize my name at all.

[–] [email protected] 1 points 10 months ago* (last edited 10 months ago)

Sad how this comment gets downvoted despite making a reasonable argument.

This comment section appears deeply partisan: if you say something along the lines of “Boo Google, AI is bad”, you get upvotes. And if you do not, you find yourself in the other camp, which gets downvoted.

The actual quality of the comment, like the one above, which makes a clever observation, doesn't seem to matter.