this post was submitted on 25 Apr 2025
392 points (100.0% liked)

Technology

Archived link: https://archive.ph/Vjl1M

Here’s a nice little distraction from your workday: Head to Google, type in any made-up phrase, add the word “meaning,” and search. Behold! Google’s AI Overviews will not only confirm that your gibberish is a real saying, it will also tell you what it means and how it was derived.

This is genuinely fun, and you can find lots of examples on social media. In the world of AI Overviews, “a loose dog won't surf” is “a playful way of saying that something is not likely to happen or that something is not going to work out.” The invented phrase “wired is as wired does” is an idiom that means “someone's behavior or characteristics are a direct result of their inherent nature or ‘wiring,’ much like a computer's function is determined by its physical connections.”

It all sounds perfectly plausible, delivered with unwavering confidence. Google even provides reference links in some cases, giving the response an added sheen of authority. It’s also wrong, at least in the sense that the overview creates the impression that these are common phrases and not a bunch of random words thrown together. And while it’s silly that AI Overviews thinks “never throw a poodle at a pig” is a proverb with a biblical derivation, it’s also a tidy encapsulation of where generative AI still falls short.

(page 2) 50 comments
[–] [email protected] 5 points 3 weeks ago* (last edited 3 weeks ago) (11 children)

Try this on your friends: make up an idiom, walk up to them, say it without context, then ask "meaning?" and see how they respond.

Pretty sure most of mine will just make up a bullshit response and go along with what I'm saying unless I give them more context.

There are genuinely interesting limitations to LLMs and the newer reasoning models, and I find it interesting to see what we can learn from them. This, though, is just ham-fisted robo-gotcha journalism.

[–] [email protected] 3 points 3 weeks ago (1 children)

So, you have friends who are as stupid as an AI. Got it. What's your point?

[–] [email protected] 3 points 3 weeks ago (1 children)

It highlights the fact that these LLMs refuse to say "I don't know," which essentially means we cannot rely on them for any factual reporting.

[–] [email protected] 1 points 3 weeks ago

But a) they don't refuse; most will tell you if you prompt them well, and b) you cannot rely on them as the sole source of truth, but an information machine can still be useful if it's right most of the time.

[–] [email protected] 5 points 3 weeks ago

This also works with asking it "why?" about random facts you make up.

[–] [email protected] 4 points 3 weeks ago (1 children)

It didn't work for me. Why not?

[–] [email protected] 3 points 3 weeks ago (1 children)

Honestly, I'm kind of impressed it's able to analyze seemingly random phrases like that. It means it's thinking and not just regurgitating facts. Someday such a phrase could actually exist, and the AI wouldn't need to wait for it to become mainstream.

[–] [email protected] 1 points 3 weeks ago

It's not thinking. It's just spicy autocomplete; having ingested most of the web, it "knows" that what follows a question about the meaning of a phrase is usually the definition and etymology of that phrase; there aren't many examples online of anyone asking for the definition of a phrase and being told "that doesn't exist, it's not a real thing." So it does some frequency analysis (actually it's probably more correct to say that it is frequency analysis) and decides what the most likely words to come after your question are, based on everything it's been trained on.

But it doesn't actually know or think anything. It just keeps giving you the next expected word until it meets its parameters.
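The "next expected word" idea above can be sketched as a toy bigram model: count which word follows which in a corpus, then repeatedly emit the most frequent follower. This is a deliberately crude illustration, not how real LLMs work internally (they use learned neural probabilities over subword tokens, not raw counts), and the tiny corpus here is made up for the example.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus; real models ingest most of the web.
corpus = (
    "the meaning of the phrase is a playful way of saying "
    "the meaning of the phrase is an idiom"
).split()

# Count which word follows which one (bigram frequencies).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def complete(word, steps=4):
    """Greedily append the most frequent follower, one word at a time."""
    out = [word]
    for _ in range(steps):
        followers = next_counts.get(out[-1])
        if not followers:
            break  # nothing ever followed this word in training
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))
```

Note that `complete` always produces a confident-looking continuation whenever it has seen the last word before; there is no path in the loop that outputs "I don't know," which is the behavior the comment above is describing.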

[–] [email protected] 3 points 3 weeks ago

One arm hair in the hand is better than two in the bush

[–] [email protected] 1 points 3 weeks ago

I for one will not be putting any gibberish into Google's AI for any reason. I don't find it fun. I find it annoying, and I have deliberately taken steps to avoid it completely. I don't understand these articles that want to throw shade at LLMs by suggesting their readers go use the LLMs, which only helps the companies that own them.

Like. Yes. We have established that LLMs will give misinformation and create slop because all their data sets are tainted. Do we need to continue to further this nonsense?
