this post was submitted on 11 May 2025
22 points (100.0% liked)

TechTakes

1999 readers
156 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago
MODERATORS
 

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

(page 2) 50 comments
sorted by: hot top controversial new old
[–] [email protected] 10 points 1 month ago (3 children)

if you saw that post making its rounds in the more susceptible parts of tech mastodon about how AI’s energy use isn’t that bad actually, here’s an excellent post tearing into it. predictably, the original post used a bunch of LWer tricks to replace numbers with vibes in an effort to minimize the damage being done by the slop machines currently being powered by such things as 35 illegal gas turbines, coal, and bespoke nuclear plants, with plans on the table to quickly renovate old nuclear plants to meet the energy demand. but sure, I’m certain that can be ignored because hey look over your shoulder is that AGI in a funny hat?

load more comments (3 replies)
[–] [email protected] 10 points 1 month ago (8 children)

Saw a six day old post on linkedin that I’ll spare you all the exact text of. Basically it goes like this:

“Claude’s base system prompt got leaked! If you’re a prompt fondler, you should read it and get better at prompt fondling!”

The prompt clocks in at just over 16k words (as counted by the first tool that popped up when I searched “word count url”). Imagine reading 16k words of verbose guidelines for a machine to make your autoplag slightly more claude shaped than, idk, chatgpt shaped.

[–] [email protected] 10 points 1 month ago

Claude does not claim that it does not have subjective experiences, sentience, emotions, and so on in the way humans do. Instead, it engages with philosophical questions about AI intelligently and thoughtfully.

lol

[–] [email protected] 9 points 1 month ago (1 children)

We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?

load more comments (1 replies)
load more comments (6 replies)
[–] [email protected] 10 points 1 month ago* (last edited 1 month ago) (7 children)

LWer suggests people who believe in AI doom make more efforts to become (internet) famous. Apparently not bombing on Lex Fridman's snoozecast, like Yud did, is the baseline.

The community awards the post one measly net karma point, and the lone commenter scoffs at the idea of trying to convince the low-IQ masses to the cause. In their defense, Vanguardism has been tried before with some success.

https://www.lesswrong.com/posts/qcKcWEosghwXMLAx9/doomers-should-try-much-harder-to-get-famous

load more comments (7 replies)
[–] [email protected] 9 points 1 month ago (3 children)
[–] [email protected] 9 points 1 month ago

I will be watching with great interest. it’s going to be difficult to pull out of this one, but I figure he deserves as fair a swing at redemption as any recovered crypto gambler. but like with a problem gambler in recovery, it’s very important that the intent to do better is backed up by understanding, transparency, and action.

load more comments (2 replies)
[–] [email protected] 9 points 1 month ago (1 children)
[–] [email protected] 9 points 1 month ago

Personal rule of thumb: all autoplag is serious until proven satire.

[–] [email protected] 9 points 1 month ago (4 children)

Beff back at it again threatening his doxxer. Nitter link

[–] [email protected] 10 points 1 month ago (1 children)

Unrelated to this: man, there should be a parody account called “based beff jeck” which is just a guy trying to promote beck’s vast catalogue as the future of music. Also minus any mention of johnny depp.

[–] [email protected] 9 points 1 month ago

Also minus any mention of johnny depp.

Depp v. Heard was my generation's equivalent to the OJ Simpson trial, so chances are he'll end up conspicuous in his absence.

load more comments (3 replies)
[–] [email protected] 9 points 1 month ago* (last edited 1 month ago) (1 children)

New piece from Brian Merchant: De-democratizing AI, which is primarily about the GOP's attempt to ban regulations on AI, but also touches on the naked greed and lust for power at the core of the AI bubble.

EDIT: Also, that title's pretty clever

load more comments (1 replies)
[–] [email protected] 9 points 1 month ago (6 children)

Satya Nadella: "I'm an email typist."

Grand Inquisitor: "HE ADMITS IT!"

https://bsky.app/profile/reckless.bsky.social/post/3lpazsmm7js2s

load more comments (6 replies)
[–] [email protected] 9 points 1 month ago (2 children)

More of a notedump than a sneer. I have been saying every now and then that there was research and stuff showing that LLMs require exponentially more effort for linear improvements. This is post by Iris van Rooij (Professor of Computational Cognitive Science) mentions something like that (I said something different, but The intractability proof/Ingenia theorem might be useful to look into): https://bsky.app/profile/irisvanrooij.bsky.social/post/3lpe5uuvlhk2c

load more comments (2 replies)
[–] [email protected] 8 points 1 month ago (2 children)

Local war profiteer goes on podcast to pitch an unaccountable fortress-state around active black site (what I assume is to do Little St James-type activities under the pretext of continued Yankee meddling)

Link to Xitter here (quoted within a delicious sneer to boot)

[–] [email protected] 9 points 1 month ago

Building a gilded capitalist megafortress within communist mortar range doesn't seem the wisest thing to do. But sure buy another big statue clearly signalling 'capitalists are horrible and shouldn't be trusted with money'

load more comments (1 replies)
load more comments
view more: ‹ prev next ›