this post was submitted on 29 Jun 2025
22 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, happy 4th July in advance...I guess.)

(page 3) 45 comments
[–] [email protected] 8 points 4 days ago (4 children)

Micro-sneer, inspired by this article on Swedish public service broadcasting

https://www.svt.se/nyheter/inrikes/anna-bjorklund-folk-uppfattar-barn-som-valdigt-jobbiga

The background is that the center-RIGHT government of Sweden is gonna launch an inquiry ("utredning") into why people aren't having (the RIGHT kind of) kids. Nothing new there, simply the same culture-war fretting already percolating in the anglosphere.

Finland already has an investigation ongoing, and the spokesperson there raises the point that one societal change that's happened in the last 25 years is... social media.

Wouldn't it be delicious if it could be proved that Facebook and Twitter and Tiktok are the reasons people don't get into relationships and have kids? Eat that, Elon!

[–] [email protected] 6 points 4 days ago (1 children)

@gerikson

Could be court shows / Maury Povich type shows / murder shows.

Watch enough of those and you're not going to want to have anything to do with humans.

[–] [email protected] 9 points 4 days ago (1 children)

Dr. Abeba Birhane got an AI True Believer^tm^ email recently, and shared it on Bluesky:

You want my opinion, I fully support acausal robot deicide, and think AI rights advocates can go fuck themselves.

[–] [email protected] 7 points 4 days ago (2 children)

Don't make me tap the sign:

no gods, no kings

[–] [email protected] 6 points 4 days ago

As bad as things are on the Nazi Bar called ~~Twitter~~ X, you can still find some real gems

[–] [email protected] 16 points 4 days ago* (last edited 3 days ago) (10 children)

Actually burst a blood vessel last weekend raging: Gary Marcus was bragging about his prediction record in 2024 being flawless.

Gary continuing to have the largest ego in the world. Stay tuned for his upcoming book "I am God" when 2027 comes around and we are all still alive. Imo some of these predictions are kind of vague, and I wouldn't argue with someone who said reasoning models are a substantial advance, but my God the LW crew fucking lost their minds. Habryka wrote a goddamn essay about how Gary is a fucking moron and a threat to humanity for underplaying the awesome power of super-duper intelligence, and a worse forecaster than the big-brain rationalists. To be clear, Habryka's objections are, in my view, extremely nitpicky, point-missing dogshit overall (feel free to judge for yourself):

https://xcancel.com/ohabryka/status/1939017731799687518#m

But what really made me want to drive a drill into my brain was the LW brigade rallying around the claim that AI companies are profitable. Are these people straight up smoking crack? OAI and Anthropic do not make a profit, full stop. In fact they are setting billions of VC money on fire?! (Strangely, some LWers in the comments seemed genuinely surprised that this was the case when shown the data. Just how unaware are these people?) Oliver tries and fails to do Olympic-level mental gymnastics by saying TSMC and NVIDIA are making money, so therefore AI is extremely profitable. In the same way, I presume, gambling is extremely profitable for degenerates like me, because the casino letting me play is making money. I rate the people of LW at minimally truth-seeking and big dumb out of 10.

Also, weird fun little fact: in Daniel K's predictions from 2022, he said that by 2023 AI companies would be so incredibly profitable that they would be easily recouping their training costs. So I guess monopoly money that you can't see in any earnings report is the official party line now?

[–] [email protected] 9 points 4 days ago

It's kind of a shame to have to downgrade Gary to "not wrong, but kind of a dick" here. Especially because his sneer game as shown at the end there is actually not half bad.

[–] [email protected] 12 points 4 days ago* (last edited 4 days ago) (1 children)

Aella popped up on doomscroll - https://youtu.be/r7WL6kaTJnw

E: oh man the comments are great

E2:

1:08:02 There's a lot of discussion among the rationalist community about the uneven distribution of IQ and its correlation with race. Why is this a topic that people fixate on if they're also convinced that this ultra-intelligence, an AGI that's smarter than every human on the planet, is coming? Why are these marginal differences so important to people?

[–] [email protected] 12 points 4 days ago* (last edited 4 days ago) (1 children)

Highlights from the comments: @wjpmitchell3 writes,

Actual psychology researcher: the problem with IQ is A) we don't really know what it's measuring, B) we don't really know how it's useful, C) we don't really know how context-specific it is, and D) when people make arguments about IQ, it's often couched around prejudiced ulterior motives. No one actually cares about IQ; they care about what it's a proxy measure of, and we don't have good evidence yet to say "this is a reliable and broadly-encompassing representation of intelligence" or whatever else. So if you are trying to use IQ differences to say that there are race differences in intelligence, you have no grounds. The best you can say is that there are race differences in this proxy measure that we're still trying to understand. It's dangerous to use an unreliable and possibly inaccurate representation of a phenomenon to make policy changes or inform decisions around race. The evidence threshold has to be extremely high because we're entering sensitive ethical spaces, which is something rationalists don't do well in, because their utilitarian calculus has difficulty capturing the intangibles.

@arnoldkotlyarevsky383 says,

Nothing wrong with being self-educated, but she comes across as being not as far along as you would want someone to be in their self-education before being given a platform.

@User123456767 observes,

You can kind of tell she grew up as a Calvinist, because she still seems to think she's part of the elect; she's just replaced an actual big-G God with some sort of AI God.

@jaredsarnie3712 begins,

I feel like so much of what she says boils down to finding bizarre hypothetical situations where child sexual abuse is morally acceptable.

And from @Fruuuuuuuuuck:

Doomscroll gooner arc

[–] [email protected] 17 points 5 days ago

Ed Zitron on bsky: https://bsky.app/profile/edzitron.com/post/3lsukqwhjvk26

Haven't seen a newsletter of mine hit the top 20 on Hackernews and then get flag banned faster, feels like it barely made it 20 minutes before it was descended upon by guys who would drink Sam Altman's bathwater

Also funny: the HN thread doesn't appear in their search.

https://news.ycombinator.com/item?id=44424456

[–] [email protected] 5 points 4 days ago

the model-based screening (which we've occasionally remarked on here before) has become enough of a thing that it's hitting the news

[–] [email protected] 9 points 4 days ago* (last edited 4 days ago) (1 children)

Alright that's it: anime streaming needs to return to fansubbing (note: this link contains a skintight anime bosom so don't open it in front of your boss unless your boss is chill)

https://bsky.app/profile/pixeldoesthings.bsky.social/post/3lswcbtkwec2t

[–] [email protected] 7 points 4 days ago

Alright that’s it: anime streaming needs to return to fansubbing

Fansubs are openly doing it for the love of the anime, so chances are they'd avoid AI slop like the plague (though the CHUDs would be okay with ChatGPT subs if it meant avoiding The Woke^tm^)

(note: this link contains a skintight anime bosom so don’t open it in front of your boss unless your boss is chill)

Good thing I'm a fucking NEET, then

[–] [email protected] 12 points 4 days ago (1 children)

This titbit by Molly White about how whales have captured Polymarket's "dispute resolution" mechanism had me chuckling

https://hachyderm.io/@molly0xfff/114779592623569008

[–] [email protected] 6 points 4 days ago

So, Zelenskyy is goth?

[–] [email protected] 11 points 4 days ago

An interesting takedown of "superforecasting" from Ben Recht: a 3-part series on his Substack where he accuses so-called superforecasters of gaming scoring rewards rather than actually being precogs. First (and least technical) part linked below...

https://www.argmin.net/p/in-defense-of-defensive-forecasting

"The term Defensive Forecasting was coined by Vladimir Vovk, Akimichi Takemura, and Glenn Shafer in a brilliant 2005 paper, crystallizing a general view of decision making that dates back to Abraham Wald. Wald envisions decision making as a game. The two players are the decision maker and Nature, who are in a heated duel. The decision maker wants to choose actions that yield good outcomes no matter what the adversarial Nature chooses to do. Forecasting is a simplified version of this game, where the decisions made have no particular impact and the goal is simply to guess which move Nature will play. Importantly, the forecaster's goal is not to never be wrong, but instead to be less wrong than everyone else.*"

*Yes, I see what I did there.
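
Since the series is about forecasters gaming their scores, here's a minimal sketch of the standard reward superforecasters get graded on (the Brier score); the forecaster names and numbers below are mine, made up for illustration, not taken from Recht's posts:

```python
# Minimal sketch of a proper scoring rule (the Brier score), the kind of
# reward superforecasters are graded on. All forecasters and numbers are
# invented for illustration.

def brier_score(forecast_prob: float, outcome: bool) -> float:
    """Squared error between the forecast probability and what happened.
    Lower is better; always guessing 50% earns a flat 0.25."""
    return (forecast_prob - (1.0 if outcome else 0.0)) ** 2

# Three hypothetical forecasters on the same run of yes/no events:
events = [True, False, True, True, False]
forecasters = {
    "hedger (always 0.5)": [0.5] * 5,
    "confident": [0.9, 0.2, 0.8, 0.7, 0.1],
    "overconfident": [1.0, 0.0, 1.0, 0.0, 0.0],
}

for name, probs in forecasters.items():
    avg = sum(brier_score(p, o) for p, o in zip(probs, events)) / len(events)
    print(f"{name}: mean Brier {avg:.3f}")

# The goal is not to never be wrong, just to score better (lower) than the
# other players -- which is also what makes the reward gameable.
```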

[–] [email protected] 29 points 6 days ago* (last edited 6 days ago) (7 children)

I applied for a job and it screened me verbally with an AI bot. I find it strange talking to an AI bot that gives no indication of whether it is following what I am saying, the way a real human does with "uh huh" or whatnot. It asked me if I had ever done Docker, and I answered that I transitioned a system to Docker. But I paused awkwardly after the word "transitioned", so the AI bot congratulated me on my gender transition, and it was on to the next question.

[–] [email protected] 9 points 5 days ago

@zbyte64 @BlueMonday1984
> the AI bot congratulated me on my gender transition

🫠

[–] [email protected] 8 points 5 days ago

@zbyte64 the technical term for those "uh huh"s is backchanneling, and I wonder if audio chatbot models have issues timing those correctly. Maybe it's a choice between not doing it at all, or doing it at incorrect times. Either sounds creepy. The pause before an AI (any AI) responds is uncanny. I bet getting backchanneling right would be even more of a nightmare.

Anyway, congrats on getting through that interview, and congrats on your transition to Docker, I guess?

[–] [email protected] 14 points 6 days ago

@zbyte64 this is so disrespectful to applicants.

[–] [email protected] 13 points 6 days ago (1 children)

Now I’m curious how a protected class question% speedrun of one of these interviews would look. Get the bot to ask you about your age, number of children, sexual orientation, etc

[–] [email protected] 11 points 6 days ago* (last edited 6 days ago) (1 children)

Not sure how I would trigger a follow-up question like that. I think most of the questions were pre-programmed, but the transcription and the AI's responses to my answers would "hallucinate". They really just wanted to make sure they were talking to someone real and not an AI candidate, because next I talked to a real person who asked much the same questions.

[–] [email protected] 9 points 6 days ago

@zbyte64 @antifuchs Something like "I have been working with Database systems from the time my youngest was born to roughly the time of my transition." and just wait for the clarifying questions.

[–] [email protected] 9 points 6 days ago

@zbyte64 @BlueMonday1984 at... at least the AI is an ally? 🤔

[–] [email protected] 4 points 5 days ago (3 children)

@zbyte64
I will go back to turning wrenches or slinging food before I spend one minute in an interview with an LLM ignorance factory.
@BlueMonday1984

[–] [email protected] 2 points 5 days ago (1 children)

@johntimaeus @zbyte64 @BlueMonday1984 Your choice. They made their choice. Judge not, lest ye be judged.

[–] [email protected] 2 points 6 days ago

@cachondo WTF? AI is crazy sometimes, and this is coming from a pro-AI person.

[–] [email protected] 11 points 5 days ago
[–] [email protected] 18 points 6 days ago (3 children)

A bit of old news but that is still upsetting to me.

My favorite artist, Kazuma Kaneko, known for doing the demon designs in the Megami Tensei franchise, sold his soul to make an AI gacha game. While I was massively disappointed that he was going the AI route, the model was supposed to be trained solely on his own art and thus I didn't have any ethical issues with it.

Fast-forward to shortly after release and the game's AI model has been pumping out Elsa and Superman.

[–] [email protected] 16 points 6 days ago

the model was supposed to be trained solely on his own art

Much simpler models are practically impossible to train without an existing model to build upon. With GenAI it's safe to assume that training the base model included large-scale scraping without consent.

[–] [email protected] 13 points 6 days ago (1 children)

It's a bird! It's a plane! It's... Evangelion Unit 1 with a Superman logo and a Diabolik mask.

[–] [email protected] 7 points 6 days ago (1 children)
[–] [email protected] 7 points 6 days ago* (last edited 6 days ago)

Good parallel, the hands are definitely strategically hidden to not look terrible.

[–] [email protected] 9 points 6 days ago

the model was supposed to be trained solely on his own art and thus I didn’t have any ethical issues with it.

Personally, I consider training any slop-generator model to be unethical on principle. Gen-AI is built to abuse workers for corporate gain - any use or support of it is morally equivalent to being a scab.

Fast-forward to shortly after release and the game’s AI model has been pumping out Elsa and Superman.

Given plagiarism machines are designed to commit plagiarism (preferably with enough plausible deniability to claim fair use), I'm not shocked.

(Sidenote: This is just personal instinct, but I suspect fair use will be gutted as a consequence of the slop-nami.)

[–] [email protected] 12 points 5 days ago* (last edited 5 days ago) (1 children)

So two weeks ago I linked titotal's detailed breakdown of what is wrong with AI 2027's "model" (tl;dr: even accepting the line-goes-up premise of the whole thing, AI 2027's math was so bad that they made the line always asymptote to infinity in the near future regardless of inputs). Titotal went to pretty extreme lengths to meet the "charitability" norms of lesswrong: corresponding with one of the AI 2027 authors, carefully considering what they might have intended, responding to comments in detail and depth, and in general not simply mocking the entire exercise in intellectual masturbation and hype generation like it rightfully deserves.
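
For a sense of the error titotal found, here's a minimal hedged sketch (my own toy function and numbers, not the actual AI 2027 code) of why a "superexponential" of that rough shape always asymptotes on a near-term date: if each doubling of the capability metric is assumed to take a fixed percentage less time than the last, infinitely many doublings complete in finite time.

```python
# Hedged illustration, NOT the actual AI 2027 model: when every doubling
# takes `shrink` (say 10%) less time than the one before, the doubling
# times form a convergent geometric series, so the curve reaches infinity
# on a finite date -- baked in before any data enters the picture.

def years_until_infinity(first_doubling: float, shrink: float) -> float:
    """Sum of d0 + d0*(1-s) + d0*(1-s)^2 + ..., which equals d0 / s."""
    return first_doubling / shrink

for d0 in (0.25, 0.5, 1.0):          # made-up initial doubling times (years)
    for shrink in (0.05, 0.10):      # made-up per-doubling speed-ups
        t = years_until_infinity(d0, shrink)
        print(f"first doubling {d0} yr, {shrink:.0%} faster each time "
              f"-> asymptote in {t:.1f} yr")
```

Fiddle with the inputs however you like; the date of the asymptote is an artifact of the assumed speed-up, not of anything in the data.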

But even with all that effort, someone still decided to make an entire (long, obviously) post with a section dedicated to tone-policing titotal: https://thezvi.substack.com/p/analyzing-a-critique-of-the-ai-2027?open=false#%C2%A7the-headline-message-is-not-ideal (here is the lw link: https://www.lesswrong.com/posts/5c5krDqGC5eEPDqZS/analyzing-a-critique-of-the-ai-2027-timeline-forecasts)

Oh, and looking back at the comments on titotal's post... his detailed elaboration of some pretty egregious errors in AI 2027 didn't really change anyone's mind, at most moving them back a year to 2028.

So, moral of the story: lesswrongers and rationalists are in fact not worth the effort to talk to, and we are right to mock them. The numbers they claim to use are pulled out of their asses to fit vibes they already feel.

And my choice for most sneerable line out of all the comments:

https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models?commentId=XbPCQkgPmKYGJ4WTb

And I therefore am left wondering what less shoddy toy models I should be basing my life decisions on.

[–] [email protected] 13 points 5 days ago (2 children)

Oh, and looking back at the comments on titotal’s post… his detailed elaboration of some pretty egregious errors in AI 2027 didn’t really change anyone’s mind, at most moving them back a year to 2028.

Huh, what's this I have open in another browser tab:

The Great Disappointment in the Millerite movement was the reaction that followed Baptist preacher William Miller's proclamation that Jesus Christ would return to the Earth by 1844, which he called the Second Advent. His study of the Daniel 8 prophecy during the Second Great Awakening led him to conclude that Daniel's "cleansing of the sanctuary" was cleansing the world from sin when Christ would come, and he and many others prepared. When Jesus did not appear by October 22, 1844, Miller and his followers were disappointed.

[–] [email protected] 9 points 5 days ago (1 children)

A couple of years back there was a South Korean president who was captured by a cult, with the cult reviewing all speeches and presumably influencing policies. No way that something like that could happen in the West, and specifically no way it could get cooked up in SV. Pay no attention to it, nothing unusual happens here, this is not a place of honor…

[–] [email protected] 5 points 4 days ago

The Korean president should have been aware of the biases that would have stopped them from falling for a cult.

[–] [email protected] 10 points 5 days ago (1 children)

Exactly. I would almost give the AI 2027 authors credit for committing to a hard date... except they already have a subtly hidden asterisk in the original AI 2027 noting that some of the authors have longer timelines. And I've noticed lots of hand-wringing and but-ackshuallies in their lesswrong comments about the differences between mode and median and mean dates, and other excuses.

Like see this comment chain https://www.lesswrong.com/posts/5c5krDqGC5eEPDqZS/analyzing-a-critique-of-the-ai-2027-timeline-forecasts?commentId=2r8va889CXJkCsrqY :

My timelines moved up to median 2028 before we published AI 2027 actually, based on a variety of factors including iteratively updating our models. But it was too late to rewrite the whole thing to happen a year later, so we just published it anyway. I tweeted about this a while ago iirc.

...You got your AI 2027 reposted like a dozen times to /r/singularity, maybe many dozens of times total across Reddit. The fucking vice president has allegedly read your fiction project. And you couldn't be bothered to publish your best timeline?

So yeah, come 2028/2029, they already have a ready-made set of excuses to backpedal and move back the doomsday prophecy.

[–] [email protected] 7 points 5 days ago

I call bullshit on Daniel K. That backtracking is so obviously ex-post-facto cover-your-ass woopsie-doopsie. Expect more of it as we get closer to whatever new "median" he has suddenly claimed. It's going to be fun to watch.

[–] [email protected] 11 points 6 days ago* (last edited 6 days ago) (1 children)

Ed's got another banger: https://www.wheresyoured.at/make-fun-of-them/

What's extra fun is that HN found it: https://news.ycombinator.com/item?id=44424456

There's at least one good thread (maybe two, if you treat the HN response separately) that could be made from this. Don't have the time personally at the moment.

I will say that I'm shocked to see some reasonable shit in the HN comments, people saying the post is too long or not an acceptable tone are getting told off rather respectably with some good explanations (effectively: this was written this way intentionally you dolt). Broken clock and all that, I guess.

[–] [email protected] 11 points 5 days ago (1 children)

Another winner from Zitron. One of the things I learned working in tech support is that a lot of people assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while it's not that the rabbit hole doesn't go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.

I'm not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can't do what they do so that you don't ask the incredibly obvious questions about why it's so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don't know what kinda excuse the business idiots and political bullshitters are going to come up with.

[–] [email protected] 7 points 4 days ago

You're absolutely right that the computer is still a black box to a lot of people, but throughout the personal-computing era there has at least been a pathway to mastery for the tools it offers. Furthermore, the touchscreen/smartphone era has roped in mechanisms of touch and proprioception that make the devices a more intimate, if deeply imperfect, extension of the self. Up until sometime late last decade, the Steve Jobs "bicycle for the mind" concept was still a driving force in the field.

I still don't think most people grasp what a subtle, but fundamental, break it is that these AI products demand you confront them as a wholly separate entity from yourself. The path to mastery, and the feedback loop that builds that path, is so obscure it may as well not exist. If you wish to retrain a model, you've got to invest huge amounts of time and resources, as well as what remains a specialized (and not well-specified, as Ed highlights) skillset... and since it's a probabilistic process, you're still not going to get consistent results.

I am more and more convinced that one of the damning core flaws of the current crop of AI technologies is that they are designed to incentivize use of centralized computing resources. Their designers are simply asking completely the wrong questions for the people the technologies are being imposed upon. But you can't say that someplace like HN, or even some parts of Bluesky, because so many people's salaries still depend on the rents from centralized computing.

[–] [email protected] 9 points 6 days ago

Found a piece which caught my attention: Resisting the Techno-Fascist Takeover: Are We Ready for Decomputing?

You want my personal opinion, the basic idea of "decomputing" that author Dan McQuillan is putting forward is likely gonna gain plenty of traction. The Trump administration more generally and DOGE more specifically have thoroughly undermined any notion of tech being an apolitical force, so arguing against the politics inherent to AI is gonna be an easier sell.
