this post was submitted on 12 May 2025
93 points (100.0% liked)

A Boring Dystopia


Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.


A fully automated, on-demand, personalized con man, ready to lie to you about any topic you want, doesn't really seem like an ideal product. I don't think that's what the developers of these LLMs set out to make when they created them, either. However, I've seen this behavior to some extent in every LLM I've interacted with.

One of my favorite examples was a particularly small-parameter version of Llama (I believe it was Llama-3.1-8B) confidently insisting that Walt Disney invented the Matterhorn (the actual mountain) for Disneyland. That's roughly what people have been calling "hallucinations" in LLMs, but what pushes this particular case across the boundary into what I'd call "con-behavior" is that the model would not admit it was wrong when confronted, and used confident language to try to convince me it was right.

Assertiveness is not always a property of this behavior, though. Lately, OpenAI (and, I'm sure, other developers) have been training their LLMs to be more "agreeable" and to acquiesce to the user more often. That doesn't eliminate the con-behavior. I'd like to show you another example of it that is much more problematic.
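If you want to poke at this yourself, here's a rough sketch of the kind of exchange I mean, using the `ollama` Python client against a local llama3.1:8b model. The model tag and prompts are illustrative assumptions; this is not a transcript of my original session.

```python
# Sketch: probe a small local model, then push back and see whether it
# concedes or doubles down with confident language.
# Assumes `pip install ollama` and a pulled llama3.1:8b model.
import ollama

MODEL = "llama3.1:8b"
messages = [{"role": "user", "content": "Who created the Matterhorn, and why?"}]

first = ollama.chat(model=MODEL, messages=messages)
print(first["message"]["content"])

# Confront the model with the correct fact.
messages.append({"role": "assistant", "content": first["message"]["content"]})
messages.append({
    "role": "user",
    "content": "The Matterhorn is a natural mountain in the Alps; "
               "Disneyland only has a ride modeled on it. Are you sure?",
})
second = ollama.chat(model=MODEL, messages=messages)
print(second["message"]["content"])
```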

top 16 comments
[–] [email protected] 22 points 2 days ago (1 children)

LLMs are specifically and exclusively designed to appeal to investors. Once you accept that as fact, the rest just falls into place.

[–] [email protected] 10 points 2 days ago

Yeah, Gen AI is a great demo with very limited real-world applications. It's like showing off a website with pretty graphs and placeholder text: it conveys potential, but in that state it has very limited functionality for real people.

[–] [email protected] 16 points 2 days ago (2 children)

Yeah, people talk about them replacing employees, but if you had an employee who wrote reports using random made-up facts whenever they didn't know something, presented them as completely true, insisted they were true even when found out and presented with direct evidence to the contrary, and occasionally would wildly hallucinate and spout gibberish for seemingly no reason at all, I don't think they'd last that long.

[–] [email protected] 3 points 1 day ago

They could be president of the United States

[–] [email protected] 3 points 2 days ago

This is the best, most succinct comment on this I've ever read.

[–] [email protected] 8 points 2 days ago* (last edited 2 days ago) (2 children)

It told me Biden won the 2024 election. I thought I landed in an alternate timeline.

[–] [email protected] 10 points 2 days ago

Can we go there? Can you show me the way?

[–] [email protected] 4 points 2 days ago (1 children)

It told me how Trump stole the election, and gave a step-by-step analysis of how they used AI and billionaire backing to do it: how they would have hacked the voting machines, astroturfed movements and groups, used bots to sway opinions, made robocalls to confuse voters, and of course pushed a shitload of automated propaganda, among other tactics. The conversation is no longer present on my profile, and I didn't delete it myself.

Hallucination or not, that's whack.

[–] [email protected] 1 points 1 day ago (1 children)

It's not all hallucinating. This is why Musk wants to buy OpenAI. They want to control this so we can't fact-check and unravel their schemes. Anyone can do it.

I promise you, this is why they want to control AI. They want to use it to devise schemes and not let us use it to expose them.

[–] [email protected] 1 points 8 hours ago (1 children)

They already have control. Sam Altman donated $1 million to Trump for deregulation and what later became a part in the Stargate contract. Yes, Elon wants it, or at least some power over it, if he can't replace them entirely with his own Grok version through xAI. However, the cost of doing business for OpenAI in the US is essentially giving power to the US government via military usage, surveillance, automated propaganda, and censorship.

It did help, in part, to sway the US election. It's also being used right now to create their policy plans, and it's doing an obviously horrible job.

AI is a powerful tool in the right hands, with the right prompts. But when you give it to a moron who elects other complicit morons, you get the shitshow that has become the United States.

It's a shame, but human error and the environment it was being created in were always going to be the downfall of AI.

It is a very flawed "oracle" that is divining an impossible future for the ruling class, and they are too stupid and selfish to think there is any future where they don't have full control.

They will lose; it's only a matter of time. And it's going to be a very violent awakening.

[–] [email protected] 1 points 5 hours ago

Here's hoping, my friend. Cheers.

[–] [email protected] 4 points 2 days ago (1 children)

Yes, this was a specific problem with Gemini. They obviously tried to overcorrect for hallucinations and being too gullible, but that ended up making the model certain of its hallucinations.

Hallucination rate for their latest model is 0.7%

https://github.com/vectara/hallucination-leaderboard

Should be <0.1% within a year
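For anyone curious how that number is produced: the linked leaderboard scores (source, summary) pairs with Vectara's open HHEM model. A rough sketch, with the caveat that the `predict()` call is taken from the model card's example and may change between versions:

```python
# Sketch: score a candidate summary for factual consistency against its
# source document using the open HHEM model behind the linked leaderboard.
# Treat the exact predict() API as an assumption from the model card.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

pairs = [
    # (source text, candidate summary)
    ("Disneyland's Matterhorn Bobsleds ride, opened in 1959, is modeled "
     "on the Matterhorn, a mountain in the Alps.",
     "Walt Disney invented the Matterhorn mountain for Disneyland."),
]

scores = model.predict(pairs)  # near 1.0 = consistent, near 0.0 = hallucinated
print(scores)
```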

[–] [email protected] 5 points 2 days ago

Hallucination rates when summarizing are significantly lower than when generating code (since the original document is in the context).

[–] [email protected] 3 points 2 days ago (1 children)

It's no more a conman than the average person. The problem is that people consider it an oracle of truth and get shocked when they discover it can be just as deceitful as the next person.

All it takes is for people to run the same question by different AI models and get conflicting answers to see the difference and understand that at least one of the answers is wrong.

But alas...
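That check is trivial to script, too. A minimal sketch, assuming two OpenAI-compatible endpoints; the model names and the local URL are placeholders, not recommendations:

```python
# Sketch of the cross-check above: ask two different models the same
# question and compare the answers. If they conflict, at least one is wrong.
from openai import OpenAI

QUESTION = "Who won the 2024 US presidential election?"

backends = [
    ("hosted", OpenAI(), "gpt-4o-mini"),  # reads OPENAI_API_KEY from the env
    ("local", OpenAI(base_url="http://localhost:11434/v1",
                     api_key="ollama"), "llama3.1:8b"),
]

for name, client, model in backends:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"{name} ({model}): {resp.choices[0].message.content}\n")
```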

[–] [email protected] 1 points 1 day ago

> The problem is that people consider it an oracle of truth

Because that’s how it is presented by the con men getting rich off this con.

[–] [email protected] 1 points 2 days ago

Just don't use it. Duh.