Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
Andrew Gelman does some more digging and poking about those "ignore all previous instructions and give a positive review" papers:
https://statmodeling.stat.columbia.edu/2025/07/07/chatbot-prompts/
Previous Stubsack discussion:
"Another thing I expect is audiences becoming a lot less receptive towards AI in general - any notion that AI behaves like a human, let alone thinks like one, has been thoroughly undermined by the hallucination-ridden LLMs powering this bubble, and thanks to said bubble’s wide-spread harms […] any notion of AI being value-neutral as a tech/concept has been equally undermined. [As such], I expect any positive depiction of AI is gonna face some backlash, at least for a good while."
Well, it appears I've fucking called it - I recently stumbled across some particularly bizarre discourse on Tumblr, reportedly over a highly unsubtle allegory for transmisogynistic violence:
If you want my opinion on this small-scale debacle, I've got two thoughts:
First, any questions about the line between man and machine have likely been put to bed for a good while. Between AI art's uniquely AI-like sloppiness, and chatbots' uniquely AI-like hallucinations, the LLM bubble has done plenty to delineate the line between man and machine, chiefly to AI's detriment. In particular, creativity has come to be increasingly viewed as exclusively a human trait, with machines capable only of copying what came before.
Second, using robots or AI to allegorise a marginalised group is off the table until at least the next AI spring. As I've already noted, the LLM bubble's undermined any notion that AI systems can act or think like us, and double-tapped any notion of AI being a value-neutral concept. Add in the heavy backlash that's built up against AI, and you've got a cultural zeitgeist that will readily other or villainise whatever robotic characters you put on screen - a zeitgeist that will ensure your AI-based allegory will fail to land without some serious effort on your part.
Humans are very picky when it comes to empathy. If LLMs were made out of cultured human neurons, grown in a laboratory, then there would be outrage over the way in which we have perverted nature; compare with the controversy over e.g. HeLa lines. If chatbots were made out of synthetic human organs assembled into a body, then not only would there be body-horror films about it, along the lines of eXistenZ or Blade Runner, but there would be a massive underground terrorist movement which bombs organ-assembly centers, by analogy with existing violence against abortion providers, as shown in R.U.R.
Remember, always close-read discussions about robotics by replacing the word "robot" with "slave". When done to this particular hashtag, the result is a sentiment that we no longer accept in polite society:
I'm not gonna lie, if slaves ever start protesting for rights, I'm also grabbing a sledgehammer and going to town. … The only rights a slave has are that of property.
The Gentle Singularity - Sam Altman
This entire blog post is sneerable so I encourage reading it, but the TL;DR is:
We're already in the singularity. ChatGPT is more powerful than anyone on earth (if you squint). Anyone who uses it has their productivity multiplied drastically, and anyone who doesn't will be out of a job. 10 years from now we'll be in a society where ideas and the execution of those ideas are no longer scarce thanks to LLMs doing most of the work. This will bring about all manner of sci-fi wonders.
Sure makes you wonder why Mr. Altman is so concerned about coddling billionaires if he thinks capitalism as we know it won't exist 10 years from now but hey what do I know.
I think I liked this observation better when Charles Stross made it.
If for no other reason than he doesn't start off by dramatically overstating the current state of this tech, isn't trying to sell anything, and, unlike ChatGPT, is actually a good writer.
So apparently Grok is even more of a Nazi conspiracy loon now.
I'm sure a Tucker Carlson interview is going to happen soon.
I'm going to put a token down and make a prediction: when the bubble pops, the prompt fondlers will go all in on a "stabbed in the back" myth and will repeatedly try to re-inflate the bubble, because we were that close to building robot god and they can't fathom a world where they were wrong.
The only question is who will get the blame.
They're doing it with cryptocurrency right now.
Whoever they say they blame, it's probably going to be ultimately indistinguishable from "the Jews"
nah they'll just stop and do nothing. they won't be able to do anything without chatgpt telling them what to do and think
i think the deflation of this bubble will be much slower and a bit anticlimactic. maybe they'll figure out a way to squeeze suckers out of their money in order to keep the charade going
Bummer, I wasn't on the invite list to the hottest SF wedding of 2025.
Update your mental models of Claude lads.
Because if the wife stuff isn't true, what else could Claude be lying about? The vending machine business?? The blackmail??? Being bad at Pokemon????
It's gonna be so awkward when Anthropic reveals that inside their data center is actually just Some Guy Named Claude who has been answering everyone's questions with his superhuman typing speed.
11,000 Indian people renamed to Claude