
TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 22 points 5 days ago (15 children)

AI fans are people who literally cannot tell good from bad. They cannot see the defects that are obvious to everyone else. They do not believe there is such a thing as quality, they think it's a scam. When you claim you can tell good from bad, they think you're lying.

[–] [email protected] 9 points 5 days ago (9 children)
  • They string words together based on the probability of one word following another (see the toy sketch below).
  • They are heavily promoted by people who don't know what they're doing.
  • They're wrong 70% of the time but present everything they say as the truth.
  • Average people have a hard time telling when they're wrong.

In other words, AIs are automated BS artists... being promoted breathlessly by BS artists.
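
To make that first bullet concrete, here is a toy sketch only: a bigram word sampler that picks each next word in proportion to how often it followed the previous one in a tiny corpus. Real LLMs are neural networks predicting probabilities over subword tokens, not word-count tables, but the basic loop of "sample the next token from a probability distribution, append, repeat" has the same shape.

```python
# Toy illustration only: a bigram "language model" that picks each next word
# purely by how often it followed the previous word in a tiny example corpus.
import random
from collections import Counter, defaultdict

corpus = "the model predicts the next word and the next word follows the last word".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling the next word in proportion to its count."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break  # dead end: the last word never had a follower in the corpus
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

The output is grammatical-looking word salad with no model of truth anywhere in the loop, which is the point being made above.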

[–] [email protected] 1 point 3 days ago (8 children)

LLMs have their flaws, but claiming they're wrong 70% of the time is just hate-train bullshit.

Sounds like you're basing that on models like GPT-3. Have you tried any of the newer models?

[–] [email protected] 5 points 3 days ago

There are days when a 70% error rate seems like low-balling it; it's mostly luck of the draw. And whether it's 10% or 90%, it's not really automation if a human has to double- and triple-check the output 100% of the time.
