This post was submitted on 22 Feb 2024
508 points (100.0% liked)

Technology


Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[email protected] 7 points 1 year ago (last edited 1 year ago)

It isn't reasoning about anything. A human did the reasoning at some point, and the LLM's dataset includes that original information. The LLM is simply matching your prompt to that training data. It's not doing anything else. It's not thinking about the question you asked it. It's a glorified keyword search.
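
As a rough illustration of that analogy, here's a hypothetical toy sketch in Python (the `corpus`, `similarity`, and `answer` names are invented for this example, and a real LLM predicts tokens with learned weights rather than doing a literal lookup like this):

```python
# Toy sketch of the "glorified keyword search" analogy: every "answer"
# is text a human already wrote, retrieved by word overlap with the prompt.
# This illustrates the analogy only; it is not how an actual LLM works.
from collections import Counter

# Hypothetical "training data": answers a human wrote at some point.
corpus = {
    "why is the sky blue": "Rayleigh scattering favors shorter wavelengths.",
    "how do plants make food": "Photosynthesis turns light, water, and CO2 into sugar.",
    "what causes tides": "Mostly the gravitational pull of the moon.",
}

def similarity(a: str, b: str) -> int:
    """Bag-of-words overlap: how many words the two texts share."""
    return sum((Counter(a.split()) & Counter(b.split())).values())

def answer(prompt: str) -> str:
    """Return the stored answer whose question best matches the prompt."""
    best = max(corpus, key=lambda q: similarity(prompt.lower(), q))
    return corpus[best]

print(answer("why does the sky look blue"))
# -> "Rayleigh scattering favors shorter wavelengths."
```

Nothing this sketch outputs can ever go beyond what a human already put into `corpus`; that's the analogy being drawn to an LLM's training data.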

It's obvious you have no idea how LLMs work at a fundamental level, yet you keep talking about them like you're an expert.

[email protected] 2 points 1 year ago

So if I find a single example of an AI doing a reasoning task that isn't in its training material, would you agree that you're wrong and that AI does reason?

[email protected] 6 points 1 year ago (last edited 1 year ago)

You won't find one. LLMs are literally incapable of the kind of reasoning you're talking about. All of their solutions are based on training data, no matter how "original" your problem might seem.

[email protected] 2 points 1 year ago

You didn't answer my question.