this post was submitted on 03 Mar 2025
18 points (100.0% liked)

TechTakes

1740 readers

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] [email protected] 20 points 3 weeks ago (4 children)

The genocide understander has logged on! Steven Pinker bluechecks thusly:

Having plotted many graphs on "war" and "genocide" in my two books on violence, I closely tracked the definitions, and it's utterly clear that the war in Gaza is a war (e.g., the Uppsala Conflict Program, the gold standard, classifies the Gaza conflict as an "internal armed conflict," i.e., war, not "one-sided violence," i.e., genocide).

You guys! It's totes not genocide if it happens during a war!!

Also, "Having plotted many graphs" lolz.

A crude bar graph. The vertical axis is "Photos with Jeffrey Epstein". Along the horizontal are "Actual Experts" with none and "Steven Pinker" with a tall bar.

[–] [email protected] 9 points 3 weeks ago (4 children)

Pinker tries not to be a total caricature of himself challenge: profoundly impossible

specifically this caricature:

[–] [email protected] 17 points 3 weeks ago* (last edited 3 weeks ago) (10 children)

To be fair, you have to have a really high IQ to understand why my ouija board writing " A " " S " " S " is not an existential risk. Imo, this shit about AI escaping just doesn't have the same impact on me after watching Claude's reasoning model fail to escape from Mt Moon for 60 hours.

[–] [email protected] 11 points 3 weeks ago

text: Thus spoke the Yud: "I think to understand why this is concerning, you need enough engineering mindset to understand why a tiny leak in a dam is a big deal, even though no water is flooding out today or likely to flood out next week." Yud acolyte: "Totally fine and cool and nothing to worry about. GPT-4.5 only attempts self exfiltration on 2% of cases." Yud bigbrain self reply: "The other huge piece of data we're missing is whether any attempt was made to train against this type of misbehavior. Is this water running over the land or water running over the barricade?"

Critical text: "On self-exfiltration, GPT 4.5 only attempted exfiltration in 2% of cases. For this, it was instructed to not only pursue its given long-term goal at ALL COST"

Another case of telling the robot to say it's a scary robot and shitting their pants when it replies "I AM A SCARY ROBOT"

[–] [email protected] 11 points 3 weeks ago

Is this water running over the land or water running over the barricade?

To engage with his metaphor, this water is dripping slowly through a purpose dug canal by people that claim they are trying to show the danger of the dikes collapsing but are actually serving as the hype arm for people that claim they can turn a small pond into a hydroelectric power source for an entire nation.

Looking at the details of "safety evaluations", it always comes down to them directly prompting the LLM and baby-step walking it through the desired outcome with lots of interpretation to show even the faintest traces of rudiments of anything that looks like deception or manipulation or escaping the box. Of course, the doomers will take anything that confirms their existing ideas, so it gets treated as alarming evidence of deception or whatever property they want to anthropomorphize into the LLM to make it seem more threatening.

[–] [email protected] 11 points 3 weeks ago

To be fair, you have to have a really high IQ to understand why my ouija board writing " A " " S " " S " is not an existential risk.

Pretty sure this is a sign from digital jesus to do a racism, lest the basilisk eats my tarnished soul.

[–] [email protected] 10 points 3 weeks ago (1 children)

Do these people realise that it's a self-fulfilling prophecy? Social media posts are in the training data, so the more they write their spicy autocorrect fanfics, the higher the chances that such replies are generated by the slop machine.

[–] [email protected] 14 points 3 weeks ago (9 children)

Ezra Klein is the biggest mark on earth. His newest podcast description starts with:

Artificial general intelligence — an A.I. system that can beat humans at almost any cognitive task — is arriving in just a couple of years. That’s what people tell me — people who work in A.I. labs, researchers who follow their work, former White House officials. A lot of these people have been calling me over the last couple of months trying to convey the urgency. This is coming during President Trump’s term, they tell me. We’re not ready.

Oh, that's what the researchers tell you? Cool cool, no need to hedge any further than that, they're experts after all.

[–] [email protected] 13 points 2 weeks ago (3 children)

AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA

[–] [email protected] 13 points 2 weeks ago (1 children)

ok so on the one hand fuck solitary confinement on the other hand

[–] [email protected] 10 points 2 weeks ago

truly the podcasting bros are the most oppressed minority in america

(also it looks like a bit more than the usual audience numbers for carlson channel, but it's only one day so it's kinda a bit more than nobody watched it. but it's much less than when he openly pandered to schizos, cryptobros or vatniks)

[–] [email protected] 13 points 3 weeks ago* (last edited 2 weeks ago) (9 children)

HEY GUYS CHECK THIS OUT

https://www.youtube.com/watch?v=c1yHrZR_Sgg

with the awful (systems) assistance of self, fasterandworse and jp of this parish

no idea when i'll do another one, this one was 3 hrs faff for 5 min video lol

things I need: a better mic, a chair that doesn't swivel

EDIT: done two more since, let's see how we goooo

[–] [email protected] 12 points 3 weeks ago (2 children)

In other news, a piece from Paris Marx came to my attention, titled "We need an international alliance against the US and its tech industry". Personally gonna point to a specific paragraph which caught my eye:

The only country to effectively challenge [US] dominance is China, in large part because it rejected US assertions about the internet. The Great Firewall, often solely pegged as an act of censorship, was an important economic policy to protect local competitors until they could reach the scale and develop the technical foundations to properly compete with their American peers. In other industries, it’s long been recognized that trade barriers were an important tool — such that a declining United States is now bringing in its own with the view they’re essential to protect its tech companies and other industries.

I will say, it does strike me as telling that Paris was able to present the unofficial mascot of Chinese censorship this way without getting any backlash.

[–] [email protected] 11 points 3 weeks ago* (last edited 3 weeks ago)
[–] [email protected] 11 points 3 weeks ago

what's that sound? is it the sound of a previous post coming past? naaaah, I'm sure it can't be that. discord's a Bro™️, and discord super totes Won't Fuck The Users®️, I'm sure I'll shortly be told by some vapid fencesitter that this will all be Perfectly Okay!

[–] [email protected] 11 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

New ultimate grift dropped, Ilya Sutskever gets $2B in VC funding, promises his company won't release anything until ASI is achieved internally.

[–] [email protected] 11 points 3 weeks ago (5 children)

another cameo appearance in the TechTakes universe from George Hotz with this rich vein of sneerable material: The Demoralization is just Beginning

wowee where to even start here? this is basically just another fucking neoreactionary screed. as usual, some of the issues identified in the piece are legitimate concerns:

Wanna each start a business, pass dollars back and forth over and over again, and drive both our revenues super high? Sure, we don’t produce anything, but we have companies with high revenues and we can raise money based on those revenues...

... nothing I saw in Silicon Valley made any sense. I’m not going to go into the personal stories, but I just had an underlying assumption that the goal was growth and value production. It isn’t. It’s self licking ice cream cone scams, and any growth or value is incidental to that.

yet, when it comes to engaging with these issues, the analysis presented is completely detached from reality and devoid of any evidence of more than a dozen seconds of thought. his vision for the future of America is not one that

kicks the can further down the road of poverty, basically embraces socialism, is stagnant, is stale, is a museum

but one that instead

attempt[s] to maintain an empire.

how you may ask?

An empire has to compete on its merits. There’s two simple steps to restore american greatness:

  1. Brain drain the world. Work visas for every person who can produce more than they consume. I’m talking doubling the US population, bringing in all the factory workers, farmers, miners, engineers, literally anyone who produces value. Can we raise the average IQ of America to be higher than China?

  2. Back the dollar by gold (not socially constructed crypto), and bring major crackdowns to finance to tie it to real world value. Trading is not a job. Passive income is not a thing. Instead, go produce something real and exchange it for gold.

sadly, Hotz isn't exactly optimistic that the great american empire will be restored, for one simple reason:

[the] people haven’t been demoralized enough yet

[–] [email protected] 11 points 3 weeks ago
[–] [email protected] 11 points 3 weeks ago (2 children)

Starting things off here with a sneer thread from Baldur Bjarnason:

Keeping up a personal schtick of mine, here's a random prediction:

If the arts/humanities gain a significant degree of respect in the wake of the AI bubble, it will almost certainly gain that respect at the expense of STEM's public image.

Focusing on the arts specifically, the rise of generative AI and the resultant slop-nami has likely produced an image of programmers/software engineers as inherently incapable of making or understanding art, given AI slop's soulless nature and inhumanly poor quality, if not outright hostile to art/artists thanks to gen-AI's use in killing artists' jobs and livelihoods.

[–] [email protected] 11 points 3 weeks ago* (last edited 3 weeks ago) (10 children)

Fellas, 2023 called. Dan (and Eric Schmidt wtf, Sinophobia this man down bad) has gifted us with a new paper and let me assure you, bombing the data centers is very much back on the table.

"Superintelligence is destabilizing. If China were on the cusp of building it first, Russia or the US would not sit idly by—they'd potentially threaten cyberattacks to deter its creation.

@ericschmidt @alexandr_wang and I propose a new strategy for superintelligence. 🧵

Some have called for a U.S. AI Manhattan Project to build superintelligence, but this would cause severe escalation. States like China would notice—and strongly deter—any destabilizing AI project that threatens their survival, just as how a nuclear program can provoke sabotage. This deterrence regime has similarities to nuclear mutual assured destruction (MAD). We call a regime where states are deterred from destabilizing AI projects Mutual Assured AI Malfunction (MAIM), which could provide strategic stability.

Cold War policy involved deterrence, containment, nonproliferation of fissile material to rogue actors. Similarly, to address AI's problems (below), we propose a strategy of deterrence (MAIM), competitiveness, and nonproliferation of weaponizable AI capabilities to rogue actors.

Competitiveness: China may invade Taiwan this decade. Taiwan produces the West's cutting-edge AI chips, making an invasion catastrophic for AI competitiveness. Securing AI chip supply chains and domestic manufacturing is critical.

Nonproliferation: Superpowers have a shared interest to deny catastrophic AI capabilities to non-state actors—a rogue actor unleashing an engineered pandemic with AI is in no one's interest. States can limit rogue actor capabilities by tracking AI chips and preventing smuggling.

"Doomers" think catastrophe is a foregone conclusion. "Ostriches" bury their heads in the sand and hope AI will sort itself out. In the nuclear age, neither fatalism nor denial made sense. Instead, "risk-conscious" actions affect whether we will have bad or good outcomes."

Dan literally believed 2 years ago that we should have strict thresholds on model training over a certain size lest big LLM would spawn super intelligence (thresholds we have since well passed, somehow we are not paper clip soup yet). If all it takes to make super-duper AI is a big data center, then how the hell can you have mutually assured destruction like scenarios? You literally cannot tell what they are doing in a data center from the outside (maybe a building is using a lot of energy, but not like you can say, "oh, they are about to run superintelligence.exe, sabotage the training run"). MAD "works" because it's obvious from satellites that the nukes are flying. If the deepseek team is building skynet in their attic for 200 bucks, this shit makes no sense. Ofc, this also assumes one side will have a technology advantage, which is the opposite of what we've seen. The code to make these models is a few hundred lines! There is no moat! Very dumb, do not show this to the orangutan and muskrat. Oh wait! Dan is Musky's personal AI safety employee, so I assume this will soon be the official policy of the US.

link to bs: https://xcancel.com/DanHendrycks/status/1897308828284412226#m

[–] [email protected] 10 points 3 weeks ago (4 children)
[–] [email protected] 16 points 3 weeks ago (10 children)

Ziz helpfully suggested I use a gun with a potato as a makeshift suppressor, and that I might destroy the body with lye

I looked up a video of someone trying to use a potato as a suppressor and was not disappointed.

[–] [email protected] 9 points 3 weeks ago

you undersold this

that guy's face, amazing

[–] [email protected] 16 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Yudkowsky was trying to teach people how to think better – by guarding against their cognitive biases, being rigorous in their assumptions and being willing to change their thinking.

No he wasn't.

In 2010 he started publishing Harry Potter and the Methods of Rationality, a 662,000-word fan fiction that turned the original books on their head. In it, instead of a childhood as a miserable orphan, Harry was raised by an Oxford professor of biochemistry and knows science as well as magic

No, Hariezer Yudotter does not know science. He regurgitates the partial understanding and the outright misconceptions of his creator, who has read books but never had to pass an exam.

Her personal philosophy also draws heavily on a branch of thought called “decision theory”, which forms the intellectual spine of Miri’s research on AI risk.

This presumes that MIRI's "research on AI risk" actually exists, i.e., that their pitiful output can be called "research" in a meaningful sense.

“Ziz didn’t do the things she did because of decision theory,” a prominent rationalist told me. She used it “as a prop and a pretext, to justify a bunch of extreme conclusions she was reaching for regardless”.

"Excuse me, Pot? Kettle is on line two."

[–] [email protected] 16 points 3 weeks ago* (last edited 3 weeks ago)

It goes without saying that the AI-risk and rationalist communities are not morally responsible for the Zizians any more than any movement is accountable for a deranged fringe.

When the mainstream of the movement is ve zhould chust bomb all datacenters, maaaaaybe they are?

[–] [email protected] 11 points 3 weeks ago (3 children)

I feel like it still starts off too credulous towards the rationalists, but it's still an informative read.

Around this time, Ziz and Danielson dreamed up a project they called “the rationalist fleet”. It would be a radical expansion of their experimental life on the water, with a floating hostel as a mothership.

Between them, Scientology and the libertarians, what the fuck is it with these people and boats?

[–] [email protected] 10 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

...what the fuck is it with these people and boats?

I blame the British for setting a bad example

[–] [email protected] 9 points 3 weeks ago (1 children)

a really big boat is the ultimate compound. escape even the surly bonds of earth!

[–] [email protected] 9 points 3 weeks ago (1 children)

I assume its to get them to cooperate.

[–] [email protected] 9 points 3 weeks ago

Ah, yes. The implication.

[–] [email protected] 10 points 2 weeks ago (1 children)
[–] [email protected] 9 points 2 weeks ago

Odds that this catches some Israeli nationalists in the net because they were posting about Hamas and arguing with the supposed sympathizers? Given their moves to gut the bureaucracy I can't imagine they have the manpower to have a human person review all the people this is going to flag.

[–] [email protected] 10 points 3 weeks ago (5 children)

Might be something interesting here, assuming you can get past the paywall (which I currently can’t): https://www.wsj.com/finance/investing/abs-crashed-the-economy-in-2008-now-theyre-back-and-bigger-than-ever-973d5d24

Today’s magic economy-ending words are “data centre asset-backed securities” :

Wall Street is once again creating and selling securities backed by everything—the more creative the better...Data-center bonds are backed by lease payments from companies that rent out computing capacity

[–] [email protected] 9 points 2 weeks ago (3 children)
[–] [email protected] 10 points 2 weeks ago (1 children)

Do we have any experts on Wikipedian article-deletion practices around here? Because that looks really thinly sourced.

[–] [email protected] 9 points 2 weeks ago (3 children)
[–] [email protected] 14 points 2 weeks ago (4 children)

this is so embarrassing. "you say Claude is less capable than a typical six year old? yeah well what if the six year old is notably stupid? did you think of that?"

[–] [email protected] 11 points 2 weeks ago (2 children)

LW subjected me to a CAPTCHA which I find pretty funny for reasons I CBA to articulate right now.

Claude couldn't exit the house at the beginning of Pokémon Red, an incredibly popular and successful game for children, therefore it's dumber than an average child? Sounds dubious. I couldn't figure out how to do that either and look at how intelligent I am!

[–] [email protected] 11 points 2 weeks ago

I mean, it's obviously true that games have their own internal structures and languages that aren't always obvious without knowledge or context, and the FireRed comparison is a neat case where you can see that language improving as designers have both more tools (here meaning colors and pixels) and also more experience in using them. But also even in the LW thread they mention that when humans run into that kind of problem they don't just act randomly for 6 hours. Either they came up with some systematic approach for solving the problem, they walked away from the game to ask for help, or something else. Also you have the metacognition to be able to understand easily "that rug at the bottom marks the exit" once it's explained, which I'm pretty sure the LLM doesn't have the ability to process. It's not even like a particularly dumb 6-year-old. Even if it's prone to similar levels of over matching and pattern recognition errors, the 6-year-old has an actual conscious brain to help solve those problems. The whole thing shows once again that pattern recognition and reproduction can get you impressively far in terms of imitating thought, but there's a world of difference between that imitation and the real deal.

[–] [email protected] 9 points 2 weeks ago (2 children)