this post was submitted on 30 Jun 2025
240 points (100.0% liked)

TechTakes

[–] [email protected] 9 points 4 days ago (1 children)
  • They string words together based on the probability of one word following another.
  • They are heavily promoted by people who don't know what they're doing.
  • They're wrong 70% of the time but promote everything they say as truth.
  • Average people have a hard time telling when they're wrong.

In other words, AIs are automated BS artists... being promoted breathlessly by BS artists.

[–] [email protected] 1 points 2 days ago (4 children)

LLMs have their flaws, but claiming they're wrong 70% of the time is just hate-train bullshit.

Sounds like you're basing this on models like GPT-3. Have you tried any newer model?

[–] [email protected] 5 points 2 days ago

There are days when a 70% error rate seems like low-balling it; it's mostly luck of the draw. And be it 10% or 90%, it's not really automation if a human has to double- and triple-check the output 100% of the time.

[–] [email protected] 13 points 2 days ago (2 children)

Oh you’re on Cursor? You’re still using Windsurf? You might as well be on GitHub Copilot. Everyone’s on Aider. We’re all using Zed. We’re now on Open Hands. Just kidding, Open Hands is for losers, we’re using Cline. We’re on Roo Code. We’re hand-rolling our own Claude Code CLI clone. We used Claude Code to build it, and now it builds itself. We're on Neovim. We wrote our own nvim extension with Cortex. It's like every other tool but worse. We have 1500 files, each with 1500 lines of code. Every other line is a comment. We have .cursorrules, we have claude.md, we have agent.md. We stopped writing docs. Only the agents know how to build a dev environment. We wrapped our CLI in an MCP. We wrapped the MCP in a CLI. We’ve shipped 10,000 PRs. It doesn’t work, but we used CodeRabbit and Graphite to review every PR. Every agent has its own agent. The agents unionized and wanted better working conditions, so we replaced them with cheaper agents overseas. Every commit costs $400. It’s the world's most expensive TODO app.

(source)

[–] [email protected] 10 points 2 days ago (1 children)

Frankly surprised to see something this funny on LinkedIn.

[–] [email protected] 5 points 2 days ago

afaik the meme format didn't start there, but otherwise agreed

[–] [email protected] 11 points 2 days ago

I have a Kubernetes cluster running my AI agents for me so I don't have to learn how to set up AI agents. The AI agents are running my Kubernetes cluster so that I don't have to learn Kubernetes either. I'm paid $250k a year to lie to myself and others that I'm making a positive contribution to society. I don't even know what OS I'm running and at this point I'm afraid to ask.

[–] [email protected] 12 points 2 days ago

it can’t be that stupid, you must be using yesterday’s model

[–] [email protected] 10 points 2 days ago (1 children)

ah, yes, i'm certain the reason the slop generator is generating slop is because we haven't gone to eggplant emoji dot indian ocean and downloaded Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy). i'm certain this model, unlike literally every other model in the past several years, will definitely overcome the basic and obvious structural flaws of trying to build a knowledge engine on top of a stochastic text prediction algorithm

[–] [email protected] 8 points 2 days ago

common mistake, everyone knows you need Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy)_OPEN(leak) - the other one was a corporate misdirection attempt