this post was submitted on 13 Mar 2025
1882 points (100.0% liked)

People Twitter

6927 readers
1945 users here now

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
(page 2) 50 comments
[–] [email protected] 5 points 1 month ago* (last edited 1 month ago)

Come on guys, the joke is right there... 60% of the time, it works every time!

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (28 children)

This, but for Wikipedia.

Edit: Ironically, the downvotes are really driving home the point in the OP. When you aren't an expert in a subject, you're incapable of recognizing the flaws in someone's discussion of it, whether the source is an LLM or Wikipedia. Just like the GPT bros defending the LLM's inaccuracies because they lack the knowledge to recognize them, we've got Wiki bros defending Wikipedia's inaccuracies because they lack the knowledge to recognize them. At the end of the day, neither one is a reliable source of information.

[–] [email protected] 4 points 1 month ago

This, but for all media.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

The obvious difference is that Wikipedia requires contributors to cite their sources, and it can be corrected in ways that LLMs are flat-out incapable of.

Really curious about anything Wikipedia has wrong, though. I can start with something an LLM gets wrong constantly, if you like.

[–] [email protected] 4 points 1 month ago

Exactly my thoughts.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (5 children)

If you want an AI to be an expert, you should only feed it data from experts. But these are trained on so much more. So much garbage.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago) (4 children)

Oof, let's see, what am I an expert in? Probably system design. I work at (insert big tech) and run a system design club there every Friday. I use ChatGPT to bounce ideas around and find holes in my design planning before each session.

Does it make mistakes? Not really. It has a hard time getting creative with nuanced examples (e.g., if you ask it to "give practical examples where the time/accuracy tradeoff in Flink is important", it can't come up with more than one or two truly distinct examples), but it's never wrong.

The only time it's blatantly wrong is when it hallucinates due to lack of context (or an oversaturated context). But you can usually tell when something doesn't make sense and prod it with follow-ups.

TL;DR: funny meme, would be funnier if true.

[–] [email protected] 3 points 1 month ago

I ask AI shitbots technical questions and get wrong answers daily. I said this in another comment, but I regularly have to ask it if what it gave me was actually real.

Like, asking Copilot about PowerShell commands and modules that are by no means obscure will cause it to hallucinate flags that don't exist, based on the prompt. I give it plenty of context on what I'm using and trying to do, and it makes up shit based on what it thinks I want to hear.
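
A minimal sketch of one way to guard against that, assuming Python and pwsh are both on the PATH: ask PowerShell itself which parameters a cmdlet actually has before trusting the suggestion. The helper below and the -Frobnicate flag are hypothetical illustrations, not real Copilot output.

```python
# Minimal sketch: verify an LLM-suggested PowerShell parameter against
# what the cmdlet actually exposes, instead of trusting the suggestion.
# Assumes `pwsh` is installed and on the PATH; "-Frobnicate" is a
# made-up stand-in for a hallucinated flag.
import subprocess

def cmdlet_parameters(cmdlet: str) -> set[str]:
    """Ask PowerShell itself for the cmdlet's real parameter names."""
    result = subprocess.run(
        ["pwsh", "-NoProfile", "-Command",
         f"(Get-Command {cmdlet}).Parameters.Keys"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in result.stdout.splitlines() if line.strip()}

def flag_exists(cmdlet: str, flag: str) -> bool:
    return flag.lstrip("-") in cmdlet_parameters(cmdlet)

print(flag_exists("Get-ChildItem", "-Recurse"))     # True: real parameter
print(flag_exists("Get-ChildItem", "-Frobnicate"))  # False: hallucinated
```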

[–] [email protected] 3 points 1 month ago

I think AI has now reached the point where it can deceive people, not the point where it's equal to humanity.

[–] [email protected] 3 points 1 month ago

Same with every documentary out there.

[–] [email protected] 3 points 1 month ago

Does ChatGPT have ADHD?

[–] [email protected] 3 points 1 month ago

I mainly use it for fact-checking sources from the internet and looking for bias; I double-check everything, of course. Beyond that, it's good for rule checks in MTG Commander games and for deck building. Mostly, though, I use it for its search function.

[–] [email protected] 2 points 1 month ago

This is exactly why I have a love/hate relationship with just about any LLM.

I love it most for generating code samples (small enough that I can check them manually, not entire files/projects) and for rewriting existing text, again small enough to verify everything. The common theme is that I have to re-read its output a few times to make 100% sure it hasn't made some random mistake; a diff helps, as in the sketch below.
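
A minimal sketch of that diff habit, using only the Python standard library (the two strings are invented examples, not real model output): compare the rewrite against the original so only the changed lines need the careful re-read.

```python
# Minimal sketch: diff an LLM rewrite against the original so the
# "random mistake" stands out, instead of re-reading everything.
# The two strings below are invented examples, not real model output.
import difflib

original = (
    "The cache is flushed every 30 seconds.\n"
    "Entries older than 5 minutes are evicted."
)
llm_rewrite = (
    "The cache is flushed every 30 seconds.\n"
    "Entries older than 15 minutes are evicted."  # the sneaky change
)

# Print only the lines that differ, in unified diff format.
for line in difflib.unified_diff(
    original.splitlines(), llm_rewrite.splitlines(),
    fromfile="original", tofile="rewrite", lineterm="",
):
    print(line)
```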

I'm not entirely sure we're going to resolve this without additional technology outside of the LLM itself.
