this post was submitted on 13 Mar 2025
1888 points (100.0% liked)

People Twitter


People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
[–] [email protected] 5 points 3 months ago

I have frequently seen GPT give a wrong answer to a question, get told that it's incorrect, and the bot fights with me and insists I'm wrong. And on other, less serious matters I've seen it immediately fold and take any answer I give it as "correct".

[–] [email protected] 4 points 3 months ago

Exactly my thoughts.

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago) (5 children)

If you want an AI to be an expert, you should only feed it data from experts. But these are trained on so much more. So much garbage.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago) (4 children)

Oof let's see, what am I an expert in? Probably system design - I work at (insert big tech) and run a system design club there every Friday. I use ChatGPT to bounce ideas and find holes in my design planning before each session.

Does it make mistakes? Not really? It has a hard time getting creative with nuanced examples (e.g. if you ask it to "give practical examples where the time/accuracy tradeoff in Flink is important" it can't come up with more than one or two truly distinct examples), but it's never wrong.

The only times it's blatantly wrong are when it hallucinates due to lack of context (or an oversaturated context). But you can usually tell when something doesn't make sense and prod it with follow-ups.

Tl;dr funny meme, would be funnier if true

[–] [email protected] 3 points 3 months ago

I ask AI shitbots technical questions and get wrong answers daily. I said this in another comment, but I regularly have to ask it if what it gave me was actually real.

Like, asking Copilot about PowerShell commands and modules that are by no means obscure will cause it to hallucinate flags that don't exist based on the prompt. I give it plenty of context on what I'm using and trying to do, and it makes up shit based on what it thinks I want to hear.
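
A quick aside: one way to catch those hallucinated flags is to ask PowerShell itself what parameters a cmdlet really accepts before running anything. A minimal sketch (Get-ChildItem is just a stand-in example, not one of the cmdlets from my prompts):

    # List every parameter the cmdlet actually accepts
    (Get-Command Get-ChildItem).Parameters.Keys

    # Or verify one suggested flag directly; this errors out if the parameter doesn't exist
    Get-Help Get-ChildItem -Parameter Recurse

If the flag the bot suggested isn't in that list, it never existed.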

[–] [email protected] 3 points 3 months ago

I think AI has now reached the point where it can deceive people, even though it's not equal to humanity.

[–] [email protected] 3 points 3 months ago

Same with every documentary out there.

[–] [email protected] 3 points 3 months ago

Does ChatGPT have ADHD?

[–] [email protected] 3 points 3 months ago

I mainly use it for fact-checking sources from the internet and looking for bias. I double-check everything, of course. Beyond that, it's good for rule checks in MTG Commander games, and for deck building. Mostly I rely on its search function.

[–] [email protected] 2 points 3 months ago

This is exactly why I have a love/hate relationship with just about any LLM.

I love it most for generating code samples (small enough that I can manually check them, not entire files/projects) and rewriting existing text, again small enough to verify everything. The common theme is that I have to re-read its output a few times to make 100% sure it hasn't made some random mistake.

I'm not entirely sure we're going to resolve this without additional technology beyond the LLM itself.
