this post was submitted on 13 May 2025
473 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 8 points 1 month ago* (last edited 1 month ago) (6 children)

So how do you tell apart AI contributions to open source from human ones?

[–] [email protected] 23 points 1 month ago* (last edited 1 month ago)

To get a bit meta for a minute, you don't really need to.

The first time a substantial contribution to a serious issue in an important FOSS project is made by an LLM with no caveats attached, the PR people of the company that trained it are going to make absolutely sure everyone and their fairy godmother knows about it.

Until then it's probably fine to treat claims that chatbots can handle a significant chunk of non-boilerplate coding tasks in enterprise projects by themselves the same way you'd treat claims of haunted houses: you don't really need to debunk every separate witness testimony, because it's self-evident that a world where an afterlife freely intertwines with daily reality would be notably and extensively different from the one we are currently living in.

[–] [email protected] 13 points 1 month ago

It's usually easy: just check whether the code is nonsense.

[–] [email protected] 13 points 1 month ago (1 children)

GitHub, for one, colors the icon red for AI contributions and green/purple for human ones.

[–] [email protected] 3 points 1 month ago (3 children)

Ah, right, so we're differentiating contributions made by humans with AI assistance from some kind of purely AI contributions?

[–] [email protected] 22 points 1 month ago (1 children)

It's a joke: rejected PRs show up as red on GitHub, open (pending) ones as green, and merged ones as purple, implying that AI code will naturally get rejected.

[–] [email protected] 7 points 1 month ago

I appreciate you explaining it. My LLM wasn't working, so I didn't understand the joke.

[–] [email protected] 9 points 1 month ago

Jesus Howard Christ how did you manage to even open a browser to type this in

[–] [email protected] 4 points 1 month ago

yeah I just want to point this out

myself and a bunch of other posters gave you solid ways that we determine which PRs are LLM slop, but it was apparently really hard to engage with those posts, so instead you're down here aggressively not getting a joke, because you desperately need the people rejecting your shitty generated code to be wrong

with all due respect: go fuck yourself

[–] [email protected] 13 points 1 month ago

if it’s undisclosed, it’s obvious from the universally terrible quality of the code, which wastes volunteer reviewers’ time in a way that legitimate contributions almost never do. the “contributors” who lean on LLMs also can’t answer questions about the code they didn’t write or help steer the review process, so that’s a dead giveaway too.

[–] [email protected] 6 points 1 month ago
  1. see if the code runs
[–] [email protected] 3 points 1 month ago

for anyone that finds this thread in the future: "check if [email protected] contributed to this codebase" is an easy hack for this test