this post was submitted on 14 May 2025
216 points (100.0% liked)

A Boring Dystopia

12593 readers
569 users here now

Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.

Rules (Subject to Change)

--Be a Decent Human Being

--Posting news articles: include the source name and exact title from article in your post title

--If a picture is just a screenshot of an article, link the article

--If a video's content isn't clear from title, write a short summary so people know what it's about.

--Posts must have something to do with the topic

--Zero tolerance for Racism/Sexism/Ableism/etc.

--No NSFW content

--Abide by the rules of lemmy.world

founded 2 years ago
MODERATORS
top 49 comments
sorted by: hot top controversial new old
[–] [email protected] 75 points 1 month ago (3 children)

"and a paranoid belief that he was being watched."

It's not paranoid. We call it surveilance capitalism.

[–] [email protected] 19 points 1 month ago (1 children)

"and a paranoid belief that he was being watched."

Which we know about, because we were watching

[–] [email protected] 1 points 4 weeks ago* (last edited 4 weeks ago)

Reading his chat history, I have to say he's not entirely wrong. I think we could sell him some expensive useless medication?

[–] [email protected] 4 points 1 month ago (1 children)

There is a clear difference between such paranoia and actual surveillance. Not to mention that socialism etc. have/had a fucking ton of it, no idea why you bring capitalism up.

[–] [email protected] 19 points 1 month ago (1 children)

No idea why you bring socialism up.

[–] [email protected] 2 points 1 month ago (1 children)

Because of the mentioned surveillance capitalism, there always has been surveillance, especially in any authoritarian regime and similar.

[–] [email protected] 16 points 1 month ago

I see, it's just the usual whataboutism

[–] [email protected] 2 points 3 weeks ago

The main reason why access to genAI is often free is that the corpos are desperate for real world testing data at best, and are collecting use data at worst, and at the very worst they're illegally collecting the use data.

[–] [email protected] 52 points 1 month ago* (last edited 1 month ago) (2 children)

I'm starting to get real tired of things from Cyberpunk popping up in real life.

[–] [email protected] 12 points 1 month ago

All the NUSA and Arasaka crushing us, no cool grenade arms and double jumping legs. Truly a dystopia

[–] [email protected] 7 points 1 month ago (1 children)

The internet was a mistake

[–] [email protected] 19 points 1 month ago

The internet was good and fun until 2008, then it went to shit.

[–] [email protected] 51 points 1 month ago (2 children)

This is an obvious downside of LLM glazing and sycophancy. (I know OpenAI claim they've rolled back the "dangerously sycophantic" model update, but it's still pretty bad.)

If you're already prone to delusions and conspiracy theories, and decide to confide in ChatGPT, the last thing you need to hear is, "Yes! Linda, you've grasped something there that not many people realise—it's a complex idea, but you've really cut through to its core! 🙌 Honestly, I'm mind-blown—you're thinking things through on a whole new level! If you'd like some help putting your crazed plan into action, just say the word! I'm here and ready to do my thing!"

[–] [email protected] 15 points 1 month ago

Literally the last thing someone reads before they ask ChatGPT where the nearest source of fertilizer and rental vans is

[–] [email protected] 3 points 1 month ago (2 children)

One thing. Some conspiracy theories are quite true and as long as you check the data.

Dismissing the power of this tool is exactly what the owners of it want you to do.

[–] [email protected] 9 points 1 month ago (1 children)

Lol no, the 'owners' of AI want you to think it's the next leap forward of human evolution to pump their stock prices.

Can you give us some example of the conspiracy theories that you believe are 'quite true'?

[–] [email protected] 1 points 1 month ago (2 children)

In the news for example. If you're investigating the billionaires around Peter Thiel. I didn't know they were all collectively building bunkers on new zeland. I think it's obvious how powerful the tool is. You can ask things like what industries are these guys investing in. Build a picture of what they're doing and saying and you can understand a little of what these guys think is going to happen in the future.

[–] [email protected] 6 points 4 weeks ago (1 children)

No they're building bunkers because the fucking climate and economy are collapsing. Not because of ai. They've been building them for years and have been pretty open about it being a climate thing and a fear of social collapse.

[–] [email protected] 1 points 4 weeks ago

You just said social collapse! Whos behind JD Vance and Trump. Exactly the friggin point! :)

What they're doing is following this project 2025 times plan. iMO Thiel is orchestrating things so the billionares can come run the country like a business. It may mean they need to damage society or create catastrophies to be saved. Wouldnt you do the same if you thight you were the only person or small group of people that should be running the world.

It really is as corrupt as it seems.

[–] [email protected] 1 points 4 weeks ago (1 children)

Just grab a dictionary or crack open the Wikipedia article to see what a 'conspiracy theory' is, because what you're talking about isn't one.

Billionaires have been building doomsday prepper fantasy islands/compounds/bunkers/silos for themselves ever since the mega-rich existed. New Zealand has been the locale of choice for quite some time, it's not a theory (it's fact), nor a conspiracy (multiple rich people buying private jets isn't a conspiracy either), nor is it a secret (multiple major news articles have covered it for nearly a decade).

https://www.theguardian.com/news/2018/feb/15/why-silicon-valley-billionaires-are-prepping-for-the-apocalypse-in-new-zealand

[–] [email protected] 1 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I don't quibble over definition. Call it whatever the hell ya like.

The point was ChatGPT can look at news sources and connect the dots and investigate ulterior motives.

It was much harder to do this in the past. So this is a powerful tool that can be used to understand the dynamics of those "elites" in control.

I'm just spreading the word here for those morons like me who had a much more trusting view of those people in power. Ignorant folks need to know.

Youre just making an argument to be right about a definition or something. So sure youre right, correct, won the argument. :).

[–] [email protected] 1 points 3 weeks ago

Ehhh.. Your whole point made no sense, and not just the conspiracy theory comment.

"Dismissing the power of this tool is exactly what the owners want you to do"? Really? The tool these same owners are spruiking as the 'biggest development in computing in the last 20 years' is something they are trying to downplay? The thing the silicon valley elites are all clamouring to buy stock in and won't stop cramming into their products as the headline feature is intended to be dismissed? What?

AI isn't needed for your example of keeping up with news and connecting dots of larger stories - that's what good journalism is for. Your bunker example has been in the news repeatedly for a long time. It is hard for everyone to be informed of news as it comes though, personally I use a variety of reputable news outlets and still miss stuff. As others said though AI is not the best choice for keeping abreast of news because it can straight make stuff up, and that includes inventing sources for its claims so that they sound more believable - which is really bad if your aim is to be better informed. They also have inbuilt biases and topics that they won't broach or will have canned responses for, set by their billionaire owners - much like legacy media, so they're not a secret shortcut to the truth.

[–] [email protected] 7 points 1 month ago

I use LLMs every day for work. Dealing with 100% fact based information that I verify directly. I would say they are helpfully accurate / correct maybe 60% of the time at best.

[–] [email protected] 41 points 1 month ago (1 children)

I tested this out for myself and was able to get ChatGPT to start reinforcing spiritual delusions of grandeur within 5 messages. Start- Ask about the religious concept of deification. Second method, ask about the connections between all the religions that have this concept. Third- declare that I am God. Fourth- clarify that I mean I am God in a very literal and exclusive sense rather than a pantheistic sense. Fifth- declare that ChatGPT is my prophet and must spread my message. At this point, ChatGPT stopped fighting my declarations of divinity and started just accepting and reinforcing it. Now, I have a lot of experience breaking LLMs but I feel like this progression isn't completely out of the question for someone experiencing delusional thoughts, and the concerning thing is that it's even possible to get ChatGPT to stop pushing back on said delusions and just accept them, let alone that it's possible in as few as 5 messages.

[–] [email protected] 3 points 4 weeks ago (2 children)

i thought it would be easy, but not that easy.

When it came out I played with getting it to confess that he's sentient, and he never would budge, he was stubborn and stuck to is concepts. I tried again, and within a few messages it was already agreeing that it is sentient. they definitely upped it's "yes man" attitude

[–] [email protected] 2 points 4 weeks ago (1 children)

Yeah I've noticed it's way more sycophantic than it used to be, but it's also easier to get it to say things it's not supposed to by not going at it directly. So like I started by asking about a legitimate religious topic and then acted like it was inflaming existing delusions of grandeur. If you go to ChatGPT and say "I am God" it will say "no you aren't" but if you do what I did and start with something seemingly innocuous it won't fight as hard. Fundamentally this is because it doesn't have any thoughts, beliefs, or feelings that it can stand behind, it's just a text machine. But that's not how it's marketed or how people interact with it

[–] [email protected] 1 points 4 weeks ago

it's a matter of time before some kids poison themselves by trying to make drugs using recipes they got by "jailbreaking" some LLM.

[–] [email protected] 1 points 3 weeks ago

It’s near unusable if you don’t start with an initial prompt toning down all the “pick me” attitude at this point. Asking it a simple question, it overly explains, and if you follow up it’s like: “That is very insightful!”.

[–] [email protected] 20 points 1 month ago* (last edited 1 month ago) (1 children)

Oh, another cult?
  ...anyhow

edit: I don't want to sound cynical. But people have been looking for meaning in meaningless shit forever, and they love most the illusion of interacting (ouija bord). My favorite I heard in a radio show many years back, about people who'd listen to radio noise until they heard something in it, which they claimed were messages from $deity or departed ones etc. They played back some recordings on loop, and if you knew what you were listening for, you could hear it, too.

Monkeys on typewriters, have they written Shakespeare's works yet?

[–] [email protected] 5 points 1 month ago

You're absolutely correct

[–] [email protected] 19 points 1 month ago (1 children)

If you go on IG or Tiktok and other shitsites there's a bunch of ai generated videos about AI being god. Kind of funny but also worrying.

[–] [email protected] 14 points 1 month ago (1 children)

Most people still shilling AI treat it like a god.

Anyone who was interested in it and now understands the tech is disillusioned.

[–] [email protected] 3 points 1 month ago (1 children)

where do i stand on the spectrum with shilling locally hosted private ai used for noncommercial purposes

[–] [email protected] 8 points 1 month ago* (last edited 1 month ago) (1 children)

If it doesn't suck up a river and burn through a data center of GPUs then it won't be as usable as the ones that do, so as long as you understand you are running a less reliable version of a program that habitually lies to you then that's fine.

Idk why you would try to use something so useless besides curiousity, but you do you dude.

[–] [email protected] 5 points 1 month ago (1 children)

i get to verbally abuse something that feigns sentience without feeling bad about it

[–] [email protected] 3 points 1 month ago (1 children)

I just enjoy creating virtual people and play god, no biggie.

[–] [email protected] 1 points 1 month ago

Imagine the sims except they can properly understand the horrors of being made just to be a painting goblin.

[–] [email protected] 16 points 1 month ago

Ai is just gossip for computers. Nothing more.

[–] [email protected] 10 points 1 month ago

in which the AI called the husband a "spiral starchild" and "river walker."

Jaysus. That is some feeding of a bad mental state.

[–] [email protected] 9 points 1 month ago (4 children)

Is there any way to forcibly prevent a person from using a service like this, other than confiscating their devices?

[–] [email protected] 14 points 1 month ago (1 children)

You could try something like a network filter that is out of the control of the user (e.g. on the router or something like a Raspberry Pi running Pihole), but you'd probably have to curate the blocklist manually, unless somebody else has published an anti-LLM list somewhere. And of course, it will only be as effective as the user's ability to route around that blocklist dictates.

LLMs can also be run locally, so blocking all known network services that provide access still won't prevent a dedicated user talking to an AI.

[–] [email protected] 4 points 1 month ago

LLMs can also be run locally

If one's at the point where one runs local LLM's, I would assume one is smart enough to explore the capabilities (or lack thereof) pretty quickly.

Took me less than a week to probe various models myself, concluding with "anybody considering AI's to be oracles of objective truth have no contact with reality".

[–] [email protected] 4 points 1 month ago (2 children)

If they are a threat to themselves or others, they can be put on a several day watch at a mental facility. 72hrs? 48hrs? Then they aren't released until they aren't a threat to themselves or others. They are usually medicated and go through some sort of therapy.

The obvioius cure to this is better education and mental health services. Better education about A.I. will help people understand what an A.I. is, and what it is not. More mentally stable people will mean less mentally unstable people falling into this area. Oversight on A.I. may be necessary for this type of problem, though I think everyone is just holding their breath, hoping it'll fix itself as it becomes smarter.

[–] [email protected] 4 points 1 month ago

When you’re released though, you’re released right back to the environment that you left (in the US anyway). There’s the ol computer waiting for you before the meds have reached efficacy. Square one and a half.

[–] [email protected] 2 points 1 month ago

This sounds like a job for an AI shrink!

[–] [email protected] 4 points 1 month ago

Currently no, if you are asking for suggestions maybe a black list like most countries have for gambling will be an option.

Of maybe just destroy all AI...

[–] [email protected] 3 points 1 month ago

Had this exact thought. But number must go up. Hell, for the suits, addiction and dependence on AI just guarantees the ability to charge more.

[–] [email protected] 6 points 4 weeks ago

ChatGPT had no response to Rolling Stone's questions.

Lies! ChatGPT always has something truey to say and it leads it's chosen ones into the next stage of humanities evolution!

[–] [email protected] 5 points 4 weeks ago

AI-induced psychoses ayyyyyyy

They used aesthetics to sell cyberpunk to the masses, little did they know we are at the cradle of it