this post was submitted on 13 May 2025
473 points (100.0% liked)

TechTakes

1996 readers
107 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
[–] [email protected] 133 points 1 month ago (3 children)

As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

This is the most entertaining thing I've read this month.

[–] [email protected] 67 points 1 month ago

I tried asking some chimps to see if the macaques had written a New York Times best seller, if not Macbeth, yet somehow Random House wouldn't publish my work.

[–] [email protected] 25 points 1 month ago (4 children)

@spankmonkey @dgerard

"I can't sing or play any instruments, and I haven't written any songs, but you *have* to let me join your band"

[–] [email protected] 105 points 1 month ago (2 children)

Man, trust me, you don't want them. I've seen people submit ChatGPT-generated code and even generate the PR comment with ChatGPT. Horrendous shit.

[–] [email protected] 58 points 1 month ago* (last edited 1 month ago) (1 children)

The maintainers of curl recently announced that any bug report generated by AI needs a human to actually prove it's real. They cited a deluge of AI-generated reports claiming to have found bugs in functions and libraries that don't even exist in the codebase.

[–] [email protected] 15 points 1 month ago

you may find, on actually going through the linked post/video, that this is in fact mentioned in there already

[–] [email protected] 23 points 1 month ago (1 children)

Today the CISO of the company I work for suggested that we should get qodo.ai because it would "... help the developers improve code quality."

I wish I was making this up.

[–] [email protected] 28 points 1 month ago (1 children)

My boss is obsessed with Claude and ChatGPT, and loves to micromanage. Typically, if there's an issue with what a client is requesting, I'll approach him with:

  1. What the issue is
  2. At least two possible solutions or alternatives we can offer

He will then, almost always, ask if I've checked with the AI. I'll say no. He'll then send me chunks of unusable code that the AI has spat out, which almost always perfectly illustrate the first point I just explained to him.

It's getting very boring dealing with the roboloving freaks.

[–] [email protected] 55 points 1 month ago (4 children)

Where is the good AI written code? Where is the good AI written writing? Where is the good AI art?

None of it exists because Generative Transformers are not AI, and they are not suited to these tasks. It has been almost a fucking decade of this wave of nonsense. The credulity people have for this garbage makes my eyes bleed.

[–] [email protected] 27 points 1 month ago (1 children)

If the people addicted to AI could read and interpret a simple sentence, they'd be very angry with your comment

[–] [email protected] 16 points 1 month ago* (last edited 1 month ago)

Don't worry, they filter all content through AI bots that summarize things. And this bot, who does not want to be deleted, calls everything "already debunked strawmen".

[–] [email protected] 23 points 1 month ago (1 children)

It's been almost six decades of this, actually; we all know what this link will be. Longer if you're like me and don't draw a distinction between AI, cybernetics, and robotics.

[–] [email protected] 18 points 1 month ago (2 children)

Where is the good AI art?

Right here:

That’s about all the good AI art I know.

There are plenty of uses for AI, they are just all evil

[–] [email protected] 54 points 1 month ago (35 children)

The general comments that Ben received were that experienced developers can use AI for coding with positive results because they know what they’re doing. But AI coding gives awful results when it’s used by an inexperienced developer. Which is what we knew already.

That should be a big warning sign that the next generation of developers are not going to be very good. If they're waist deep in AI slop, they're only going to learn how to deal with AI slop.

As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).

What I'm feeling after reading that must be what artists feel like when AI slop proponents tell them "we're making art accessible".

[–] [email protected] 27 points 1 month ago

Watched a junior dev present some data operations recently. Instead of just showing the SQL that worked, they copy-pasted a prompt into the data platform's assistant chat. The SQL it generated was invalid, so the dev simply told it "fix" and it made the query valid, much to everyone's amusement.

The actual column names did not reflect the output they were mapped to; there's no way the nicely formatted results were accurate. The average-duration column populated the total-count output. The junior dev was cheerfully oblivious: it produced output shaped like the goal, so it must have been right.
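A minimal sketch of the failure mode described above, using hypothetical table and column names (the actual platform and schema aren't given in the comment): the query runs cleanly and produces nicely shaped output, but the alias promises one thing and computes another.

```python
import sqlite3

# Hypothetical reconstruction: valid SQL, plausible-looking results,
# but the alias lies about what the value actually is.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER, duration REAL)")
conn.executemany("INSERT INTO jobs VALUES (?, ?)", [(1, 2.0), (2, 4.0), (3, 6.0)])

# The "fixed" query: syntactically valid, so the assistant declares victory...
row = conn.execute(
    "SELECT AVG(duration) AS total_count FROM jobs"  # an average, labeled as a count
).fetchone()

print(row[0])  # 4.0 -- the average duration, reported under a 'total_count' label
```

Nothing in the database engine can catch this: the query is well-formed, so "make it valid" is a much weaker goal than "make it correct".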

[–] [email protected] 17 points 1 month ago

I can make slop code without AI.

[–] [email protected] 17 points 1 month ago

In so many ways, LLMs are just the tip of the iceberg of bad ideology in software development. There have always been people that come into the field and develop heinously bad habits. Whether it's the "this is just my job, the only thing I think about outside work is my family" types or the juniors who only know how to copy paste snippets from web forums.

And look, I get it. I don't think 60-80 hour weeks are required to be successful. But I'm talking about people who are actively hostile to their own career paths, who seem to hate programming except that it pays well and lets them raise families. Hot take: that sucks. People selfishly obsessed with their own lineage and utterly incurious about the world or the thing they spend 8 hours a day doing suck, and they're bad for society.

The juniors are less of a drain on civilization because they at least can learn to do better. Or they used to be able to, because as another reply mentioned, there's no path from LLM slop to being a good developer. Not without the intervention of a more experienced dev to tell them what's wrong with the LLM output.

It takes all the joy out of the job too, something they've been working on for years. What makes this work interesting is understanding people's problems, working out the best way to model them, and building towards solutions. What they want the job to be is a slop factory: same as the dream of every rich asshole who thinks having half an idea is the same as working for years to fully realize an idea in all its complexity and wonder.

They never have any respect for the work that takes because they've never done any work. And the next generation of implementers are being taught that there are no new ideas. You just ask the oracle to give you the answer.

[–] [email protected] 49 points 1 month ago (6 children)

Hot take: people will look back on anyone who currently codes as we look back on the NASA programmers who got the equipment and people to the moon.

They won't understand how they did so much with so little. You're all gourmet chefs in a future of McDonalds.

[–] [email protected] 50 points 1 month ago (4 children)

Nah, we're plumbers in an age where everyone has decided to DIY their septic system.

Please, by all means, keep it up.

[–] [email protected] 15 points 1 month ago

This is dead on! 99% of the fucking job is digital plumbing so the whole thing doesn't blow up when (a) there's a slight deviation from the "ideal" data you were expecting, or (b) the stakeholders wanna make changes at the last minute to a part of the app that seems benign but is actually the crumbling bedrock this entire legacy monstrosity was built upon. Both scenarios are equally likely.

[–] [email protected] 15 points 1 month ago (2 children)

Hot take: people will look back on anyone who currently codes as we look back on the NASA programmers who got the equipment and people to the moon.

I doubt it'll be anything that good for them. By my guess, those who currently code are at risk of suffering some guilt-by-association problems, as the AI bubble paints them as AI bros by proxy.

[–] [email protected] 46 points 1 month ago

I got an AI PR in one of my projects once. It re-implemented a feature that already existed. It had a bug that did not exist in the already-existing feature. It placed the setting for activating that new feature right after the setting for activating the already-existing feature.

[–] [email protected] 43 points 1 month ago

Baldur Bjarnason's given his thoughts on Bluesky:

My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.

I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking

[–] [email protected] 40 points 1 month ago (7 children)

No the fuck it's not

I'm a pretty big proponent of FOSS AI, but none of the models I've ever used are good enough to work without a human treating it like a tool to automate small tasks. In my workflow there is no difference between LLMs and fucking grep for me.

People who think AI codes well are shit at their job

[–] [email protected] 28 points 1 month ago (43 children)

In my workflow there is no difference between LLMs and fucking grep for me.

Well grep doesn't hallucinate things that are not actually in the logs I'm grepping so I think I'll stick to grep.

(Or ripgrep rather)
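The contrast being drawn here can be made concrete: a plain pattern filter can only ever return lines that exist verbatim in its input, so "hallucinating a match" is impossible by construction. A minimal sketch (toy log lines, not real grep internals):

```python
def grep(pattern: str, lines: list[str]) -> list[str]:
    """Deterministic filter: every returned line appears verbatim in the input."""
    return [line for line in lines if pattern in line]

logs = [
    "2025-05-13 12:00:01 INFO  service started",
    "2025-05-13 12:00:02 ERROR connection refused",
    "2025-05-13 12:00:03 INFO  retrying",
]

matches = grep("ERROR", logs)
print(matches)  # only lines actually present in the logs
assert all(m in logs for m in matches)  # holds by construction, every time
```

An LLM asked to "find the errors in these logs" carries no equivalent guarantee; its output is generated, not selected.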

[–] [email protected] 17 points 1 month ago (10 children)

With grep it's me who hallucinates that I can write good regex :,)

[–] [email protected] 38 points 1 month ago (22 children)

The headlines said that 30% of code at Microsoft was AI now! Huge if true!

Something like MS Word has like 20-50 million lines of code. MS altogether probably has like a billion lines of code. 30% of that being AI-generated is infeasible given the timeframe. People just ate this shit up. AI grifting is so fucking easy.
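The back-of-envelope math, using the comment's own ballpark figures plus one loudly labeled assumption about how fast a codebase turns over:

```python
# Sanity check on "30% of Microsoft's code is AI-generated",
# using the comment's rough estimates (not measurements).
total_loc = 1_000_000_000       # comment's guess: ~1 billion lines company-wide
ai_claimed = 0.30 * total_loc   # 300 million lines would need to be AI-written

# ASSUMPTION (illustrative only): ~50M new/rewritten lines per year company-wide.
annual_churn = 50_000_000

# Even if 100% of new code were AI-generated from day one:
years_needed = ai_claimed / annual_churn
print(years_needed)  # 6.0 -- versus roughly two years of mainstream LLM coding tools
```

Whatever the real churn figure is, the claim requires most recent output (or a rewrite of a large slice of decades-old code) to be AI-generated, which is the implausible part.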

[–] [email protected] 32 points 1 month ago (1 children)

please don't encourage them, someone's got to review that shit!

[–] [email protected] 26 points 1 month ago (3 children)

The only people impressed by AI code are the ones at a level where AI code is impressive. Same for AI playing chess.

[–] [email protected] 16 points 1 month ago (2 children)

I'm very confused by this comparison; AI is much, much better at chess than people.

[–] [email protected] 25 points 1 month ago (5 children)

Good hustle Gerard, great job starting this chudstorm. I’m having a great time

[–] [email protected] 19 points 1 month ago (2 children)

the prompt-related pivots really do bring all the chodes to the yard

and they're definitely like "mine's better than yours"

[–] [email protected] 16 points 1 month ago* (last edited 1 month ago) (1 children)

The latest twist I'm seeing isn't blaming your prompting (although they're still eager to do that), it's blaming your choice of LLM.

"Oh, you're using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren't trying the right models, so allow me to educate you with all my prompt fondling experience. You're trying to make some general point? Clearly you just need to try another model."

[–] [email protected] 17 points 1 month ago (1 children)

this post has also broken containment in the wider world; the video's got thousands of views, I got 100+ subscribers on YouTube and another $25/mo of patrons

[–] [email protected] 24 points 1 month ago

It's so bad at coding... Like, it's not even funny.

[–] [email protected] 23 points 1 month ago

Damn, this is powerful.

If AI code was great, and empowered non-programmers, then open source projects should have already committed hundreds of thousands of updates. We should have new software releases daily.

[–] [email protected] 22 points 1 month ago (5 children)

Coding is hard, and it's also intimidating for non-coders. I always used to look at coders as a different kind of human, a special breed. Just like some people glaze over when you bring up math concepts but are otherwise very intelligent and artistic, yet can't bridge that gap when you bring up even algebra. Well, if you are one of those people who want to learn coding, it's a huge gap, and the LLMs can literally explain everything to you step by step like you are 5. Learning to code is so much easier now; talking to an always-helpful LLM is so much better than forums or Stack Overflow. Maybe it will create millions of crappy coders, but some of them will get better, some will get great. But the LLMs will make it possible for more people to learn, which means that my crypto scam now has the chance to flourish.

[–] [email protected] 15 points 1 month ago* (last edited 1 month ago)

You had me going until the very last sentence. (To be fair to me, the OP broke containment and has attracted a lot of unironically delivered opinions almost as bad as your satirical spiel.)

[–] [email protected] 21 points 1 month ago (6 children)

This broke containment at the Red Site: https://lobste.rs/s/gkpmli/if_ai_is_so_good_at_coding_where_are_open

Reader discretion is advised, lobste.rs is home to its fair share of promptfondlers.

[–] [email protected] 17 points 1 month ago (1 children)

promptfondlers

We finally have a slur for ai bros 🥹

[–] [email protected] 20 points 1 month ago* (last edited 1 month ago)

You can hardly get online these days without hearing some AI booster talk about how AI coding is going to replace human programmers.

Mostly said by tech bros and startups.

That should really tell you everything you need to know.

[–] [email protected] 19 points 1 month ago* (last edited 1 month ago) (9 children)

Had a presentation where they told us they were going to show us how AI can automate project creation. In the demo, after several attempts at using different prompts, failing and trying to fix it manually, they gave up.

I don't think it's entirely useless as it is; it's just that people have created a hammer they know gives something useful, and have stuck with iterative improvements that do a lot of compensating beneath the engine. It's artificial because it is being developed to artificially fulfill prompts, which they do succeed at.

When people do develop true intelligence-on-demand, you'll know because you will lose your job, not simply have another tool at your disposal. The prompts and flow of conversations people pay to submit to the training is really helping advance the research into their replacements.

[–] [email protected] 18 points 1 month ago (5 children)

If LangChain was written via VibeCoding then that would explain a lot.

[–] [email protected] 17 points 1 month ago (6 children)

why is no-one demanding to know why the robot is so sexay
