
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] [email protected] 13 points 5 days ago (2 children)
[–] [email protected] 18 points 5 days ago* (last edited 5 days ago) (2 children)

what kind of semen retention scheme is this

[–] [email protected] 13 points 4 days ago

No Nut Neuravember

[–] [email protected] 10 points 5 days ago

I want my kids to be science experiments, as there is no other way an ethics board would approve this kind of thing.

[–] [email protected] 2 points 3 days ago

I'm pretty sure there are some other factors he's gonna need to sort out before having kids is even an actual question. For example, finding a woman who wants to have his kids and will let him fuck with their infant brains.

Also given how we see the brain develop in cases of traumatic injury I would expect to see that neuroplasticity route around any kind of implant under most circumstances. Nerves aren't wires and you can't just plug 'em in and wait for a software patch.

[–] [email protected] 14 points 5 days ago* (last edited 5 days ago) (4 children)

Turns out some Silicon Valley folk are unhappy that a whole load of waymos got torched, fantasised that the cars could just gun down the protesters, and used genAI video to bring their fantasies to some vague approximation of “life”.

https://xcancel.com/venturetwins/status/1931929828732907882

The author, Justine Moore, is an investment partner at a16z. May her future ventures be incendiary and uninsurable.

(via garbageday.email)

[–] [email protected] 9 points 5 days ago

I wonder if some of those chuds think those waymos might be conscious

[–] [email protected] 6 points 4 days ago

What is it with every fucking veo3 video being someone talking to the camera?! Artificial slop model tuned on humanmade slop.

[–] [email protected] 7 points 5 days ago

Seeing shit like this alongside the discussions of the use of image recognition and automatic targeting in the recent Ukrainian drone attacks on Russian bombers is not great.

Also something something sanitized violence something something. These people love to fantasize about the thrill of defending themselves and their ideology with physical force, but even in their propaganda they are (rightly) disgusted and terrified by the consequences that such violence has on actual people.

[–] [email protected] 6 points 5 days ago* (last edited 4 days ago)

At first I was like:

But then I was like:

[–] [email protected] 13 points 5 days ago* (last edited 5 days ago)

In other news, reports of AI images turning everything yellow have made it to Know Your Meme.

[–] [email protected] 13 points 5 days ago* (last edited 5 days ago) (2 children)

"Cursor YOLO deleted everything in my computer":

Hi everyone - as previous context, I’m an AI Program Manager at J&J and have been using Cursor for personal projects since March.

Yesterday I was migrating some of my back-end configuration from Express.js to Next.js and Cursor bugged hard after the migration - it tried to delete some old files, didn’t work the first time, and it ended up deleting everything on my computer, including itself. I used EaseUS to try to recover the data, but that didn’t work very well either. Luckily I always have everything on my Google Drive and GitHub, but it still scared the hell out of me.

Now I’m allergic to YOLO mode and won’t try it again anytime soon. Has anyone had an issue similar to this, or am I the first one to have everything deleted by AI?

The response:

Hi, this happens quite rarely but some users do report it occasionally.

My T-shirt is raising questions already answered, etc.

(via)

[–] [email protected] 11 points 5 days ago (4 children)

I looked this up because I thought it was a nickname for something, but no, Cursor seems to have a setting that’s officially called YOLO mode. As per their docs:

With Yolo Mode, the agent can auto-run terminal commands

So this guy explicitly ticked the box that allowed the bullshit generator to execute arbitrary code on his machine. Why would you ever use that? What’s the rationale for enabling a setting like that? They even named it YOLO mode. It’s like the fucking red button in the movie labelled “don’t push the red button”, and promptfans are still like: yes, that sounds like a good idea!
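
For anyone who hasn’t used one of these agents: the checkbox in question removes the one safety step in the loop. Here’s a minimal sketch of the generic pattern (my own illustration in Python; this is not Cursor’s actual code, and the function name is made up):

```python
import subprocess

# Hypothetical sketch of the confirmation gate that "auto-run"/YOLO settings
# remove. This is the generic agent pattern, not Cursor's implementation.
def run_agent_command(cmd: str) -> None:
    print(f"Agent wants to run: {cmd}")
    if input("Allow? [y/N] ").strip().lower() != "y":
        print("Skipped.")
        return
    subprocess.run(cmd, shell=True, check=False)

# With YOLO mode on, the function above effectively collapses to a bare
# subprocess.run(cmd, shell=True): whatever the model hallucinates runs
# immediately, with your full user privileges.
```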

[–] [email protected] 3 points 4 days ago* (last edited 4 days ago)

The red button carries an implicit claim that it was worth including at all.

It is like Google’s AI overviews. No disclaimer can be sufficient, because putting the overview at the top of Google search implies a level of usefulness it does not meet, not even in the “evil plan to make more money briefly” way.

Edit: my analogy to AI disclaimers is using “this device uses nuclei known to the state of California to…” in place of “drop and run”.

[–] [email protected] 11 points 5 days ago (3 children)

Can you imagine selling something like a firewall appliance with a setting called "Yolo Mode", or even tax software or a photo organizer or anything that handles any data, even if only of middling importance, and still expecting to be taken seriously at all?

[–] [email protected] 11 points 5 days ago (1 children)

Setting my oven to YOLO Mode and dying in a fire 7 seconds later

[–] [email protected] 9 points 5 days ago (1 children)

I set my car to YOLO mode by pointing the vehicle roughly in the direction I wish to travel and then dropping a brick on the accelerator.

[–] [email protected] 8 points 4 days ago* (last edited 4 days ago) (1 children)

We already have this, it's just a Tesla with "FSD" on

[–] [email protected] 2 points 3 days ago

I thought FSD only works if there are kids around for the car to aim at.

[–] [email protected] 2 points 4 days ago* (last edited 4 days ago)

YOLO charging mode on a phone: disable the battery overheating sensor and the current limiter.

I suspect they added YOLO mode because without it this thing is just too useless.

[–] [email protected] 5 points 5 days ago

My tax prep software definitely has a mode called "give me Deus Ex"

[–] [email protected] 10 points 5 days ago

Well, they can't fully outsource thinking to the autocomplete if they get asked whether some actions are okay.

[–] [email protected] 5 points 5 days ago

deserved tbh

[–] [email protected] 10 points 5 days ago (1 children)

I was reading a post by someone trying to make shell scripts with an llm, and at one point the system suggested making a directory called ~ (which is shorthand for your home directory in a bunch of unix-alikes). When the user pointed out this was bad, the llm recommended remediation using rm -r ~, which would of course delete all your stuff.

So, yeah, don’t let the approximately-correct machine do things by itself, when a single-character substitution can destroy everything you own.
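
For anyone who hasn’t been burned by this before, a safe demonstration of why that one character matters so much (plain Python, nothing gets deleted; the point is who does the expansion):

```python
import os
import shlex

# An unquoted ~ is expanded to your home directory before rm ever sees it,
# so `rm -r ~` targets everything you own...
print(os.path.expanduser("~"))   # e.g. /home/you

# ...while a quoted '~' survives word-splitting as a literal one-character
# directory name, which is what the user actually wanted removed.
print(shlex.split("rm -r '~'"))  # ['rm', '-r', '~']
```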

And JFC, being surprised that something called “YOLO” might be bad? What were people expecting? --all-the-red-flags

[–] [email protected] 11 points 5 days ago

The basilisk we have at home

[–] [email protected] 8 points 5 days ago

And back on the subject of builder.ai, there’s a suggestion that it might not have been A Guy Instead, and the whole 700 human engineers thing was a misunderstanding.

https://blog.pragmaticengineer.com/builder-ai-did-not-fake-ai/

I’m not wholly sure I buy the argument, which is roughly:

  • people from the company are worried that this sort of news will affect their future careers.
  • humans in the loop would have exhibited far too high latency, and getting an llm to do it would have been much faster and easier than having humans try to fake it at speed and scale.
  • there were over a thousand “external contractors” writing loads of code, but that’s not the same as being Guys Instead.

I guess the question then is: if they did have a good genai tool for software dev… where is it? Why wasn’t Microsoft interested in it?

[–] [email protected] 13 points 6 days ago (1 children)
[–] [email protected] 12 points 5 days ago

At the same time, we have a Heartbreaking: The Worst Person You Know etc in the article itself:

“What does a human slowly going insane look like to a corporation?” Mr. Yudkowsky asked in an interview. “It looks like an additional monthly user.”

[–] [email protected] 28 points 1 week ago (4 children)

Bringing over aio's comment from the end of last week's stubsack:

This week the Wikimedia Foundation tried to gather support for adding LLM summaries to the top of every Wikipedia article. The proposal was overwhelmingly rejected by the community, but the WMF hasn't gotten the message, saying that the project has been "paused". It sounds like they plan to push it through regardless.

Way down in the linked wall o' text, there's a comment by "Chaotic Enby" that struck me:

Another summary I just checked, which caused me a lot more worries than simple inaccuracies: Cambrian. The last sentence of that summary is "The Cambrian ended with creatures like myriapods and arachnids starting to live on land, along with early plants.", which already sounds weird: we don't have any fossils of land arthropods in the Cambrian, and, while there has been a hypothesis that myriapods might have emerged in the Late Cambrian, I haven't heard anything similar being proposed about arachnids. But that's not the worrying part.

No, the issue is that nowhere in the entire Cambrian article are myriapods or arachnids mentioned at all. Only one sentence in the entire article relates to that hypothesis: "Molecular clock estimates have also led some authors to suggest that arthropods colonised land during the Cambrian, but again the earliest physical evidence of this is during the following Ordovician". This might indicate that the model is relying on its own internal knowledge, and not just on the contents of the article itself, to generate an "AI overview" of the topic instead.

Further down the thread, there's a comment by "Gnomingstuff" that looks worth saving:

There was an 8-person community feedback study done before this (a UI/UX test using the original Dopamine summary), and the results are depressing as hell. The reason this was being pushed to prod sure seems to be the cheerleading coming from 7 out of those 8 people: "Humans can lie but AI is unbiased," "I trust AI 100%," etc.

Perhaps the most depressing is this quote -- "This also suggests that people who are technically and linguistically hyper-literate like most of our editors, internet pundits, and WMF staff will like the feature the least. The feature isn't really "for" them" -- since it seems very much like an invitation to ignore all of us, and to dismiss any negative media coverage that may ensue (the demeaning "internet pundits").

Sorry for all the bricks of text here; this is just so astonishingly awful on all levels, and everything I find seems worse than the last.

Another comment by "CMD" evaluates the summary of the dopamine article mentioned there:

The first sentence is in the article. However, the second sentence mentions "emotion", a word that while in a couple of reference titles isn't in the article at all. The third sentence says "creating a sense of pleasure", but the article says "In popular culture and media, dopamine is often portrayed as the main chemical of pleasure, but the current opinion in pharmacology is that dopamine instead confers motivational salience", a contradiction. "This neurotransmitter also helps us focus and stay motivated by influencing our behavior and thoughts". Where is this even from? Focus isn't mentioned in the article at all, nor is influencing thoughts. As for the final sentence, depression is mentioned a single time in the article in what is almost an extended aside, and any summary would surely have picked some of the examples of disorders prominent enough to be actually in the lead.

So that's one of five sentences supported by the article. Perhaps the AI is hallucinating, or perhaps it's drawing from other sources like any widespread llm. What it definitely doesn't seem to be doing is taking existing article text and simplifying it.

[–] [email protected] 24 points 1 week ago

but the WMF hasn't gotten the message, saying that the project has been "paused". It sounds like they plan to push it through regardless.

Classic “Yes” / “ask me later”. You hate to see it.

[–] [email protected] 21 points 1 week ago (3 children)

The thing that galls me here even more than other slop is that there isn't even some kind of horrible capitalist logic underneath it. Like, what value is this supposed to create? Replacing the leads written by actual editors, who work for free? You already have free labor doing a better job than this, why would you compromise the product for the opportunity to spend money on compute for these LLM not-even-actually-summaries? Pure brainrot.

[–] [email protected] 25 points 1 week ago (2 children)

So, I've been spending too much time on subreddits with a heavy promptfondler presence, such as /r/singularity, and the reddit algorithm keeps recommending me subreddits with even more unhinged LLM hype. One annoying trend I've noticed is that people constantly conflate LLM-hybrid approaches, such as AlphaGeometry or AlphaEvolve (or even approaches that don't involve LLMs at all, such as AlphaFold), with LLMs themselves. From there they act like of course LLMs can [insert things LLMs can't do: invent drugs, optimize networks, reliably solve geometry exercises, etc.].

Like, I saw multiple instances of commenters questioning/mocking/criticizing the recent Apple paper using AlphaGeometry as a counterexample. AlphaGeometry can actually solve most of the problems without an LLM at all; the LLM component just replaces a set of heuristics that suggest proof approaches, and the majority of the proof work is done by a symbolic AI working within a rigid formal proof system.
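
Since that division of labor keeps getting lost, here’s a rough pseudocode sketch of the AlphaGeometry-style loop as I understand it from the published paper. Every name below is invented for illustration; this is not DeepMind’s code:

```python
# Rough sketch of an AlphaGeometry-style solve loop (my simplification of
# the published design; all names here are made up for illustration).
def solve(problem, goal, symbolic_engine, language_model, max_rounds=16):
    state = problem
    for _ in range(max_rounds):
        # The symbolic engine (deduction rules + algebraic reasoning) does
        # the actual proof search; this alone already solves many problems.
        proof = symbolic_engine.search(state, goal)
        if proof is not None:
            return proof
        # Only when the engine is stuck does the LLM contribute: it proposes
        # an auxiliary construction (a new point or line), playing the role
        # of the hand-written suggestion heuristics it replaced.
        construction = language_model.propose_construction(state, goal)
        state = state.with_construction(construction)
    return None  # no proof found within the budget
```

The LLM never writes the proof; it only nudges the search, which is exactly why “AlphaGeometry shows LLMs can do math” is a non sequitur.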

I don't really have anywhere I'm going with this, just something I noted that I don't want to waste the energy repeatedly re-explaining on reddit, so I'm letting a primal scream out here to get it out of my system.

[–] [email protected] 18 points 1 week ago

Relatedly, the lumping of computer vision and machine learning (useful, actually works in real life, can be used to make products that turn a profit or that people actually want, and sometimes even all of the above at the same time) together with LLMs under the umbrella of “AI” is something I find particularly galling.

The eventual collapse of the AI bubble and the subsequent second AI winter is going to take a lot of useful technology with it that had the misfortune to be standing a bit too close to LLMs.

[–] [email protected] 19 points 1 week ago* (last edited 1 week ago) (7 children)

Did you know there’s a new fork of xorg, called x11libre? I didn’t! I guess not everyone is happy with wayland, so this seems like a reasonable…

It's explicitly free of any "DEI" or similar discriminatory policies.. [snip]

Together we'll make X great again!

Oh dear. Project members are of course being entirely normal about the whole thing.

Metux, one of the founding contributors, is Enrico Weigelt, who has reasonable opinions like “everyone except the nazis were the real nazis in WW2”, and also had an anti-vax (and possibly eugenicist) rant on the Linux kernel mailing list, as you do.

I’m sure it’ll be fine though. He’s a great coder.

(links were unashamedly pillaged from this mastodon thread: https://nondeterministic.computer/@mjg59/114664107545048173)

[–] [email protected] 19 points 1 week ago (6 children)

Ok, maybe someone can help me here figure something out.

I've wondered for a long time about a strange adjacency which I sometimes observe between what I call (due to lack of a better term) "unix conservativism" and fascism. It's the strange phenomenon where ideas about "classic" and "pure" unix systems coincide with the worst politics. For example the "suckless" stuff. Or the ramblings of people like ESR. Criticism of systemd is sometimes infused with it (yes, there is plenty of valid criticism as well. But there's this other kind of criticism I've often seen, which is icky and weirdly personal). And I've also seen traces of this in discussions of programming languages newer than C, especially when topics like memory safety come up.

This is distinguished from retro computing and nostalgia and such, those are unrelated. If someone e.g. just likes old unix stuff, that's not what I mean.

As you may have noticed, I struggle a bit to come up with a clear definition, or even to say whether there really is a connection or just a loose set of examples that don’t form a definable set. So, is there really something there, or am I seeing a connection that doesn’t exist?

I've also so far not figured out what might create the connection. Ideas I have come up with are: appeal to times that are gone (going back to an idealized computing past that never existed), elitism (computers must not become user friendly), ideas of purity (an imaginary pure "unix philosophy").

Anyway, now with this new xlibre project, there's another one that fits into it...

[–] [email protected] 18 points 1 week ago (3 children)

New Zitron dropped, and, fuck, I feel this one in my bones.
