this post was submitted on 02 Mar 2025
204 points (100.0% liked)

Technology

67338 readers
3756 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] [email protected] 66 points 3 weeks ago (8 children)

Gotta quit anthropomorphising machines. It takes free will to be a psychopath, all else is just imitating.

[–] [email protected] 20 points 3 weeks ago (5 children)

Free will doesn't exist in the first place

[–] [email protected] 24 points 3 weeks ago (6 children)

Prove it.

Or not. Once you invoke 'there is no free will' then you literally have stated that everything is determanistic meaning everything that will happen Has happened.

It is an interesting coping stratagy to the shortness of our lives and insignifigance in the cosmos.

[–] [email protected] 10 points 3 weeks ago (2 children)

Prove it.

Asking to prove non-existance of something. Typical.

[–] [email protected] 6 points 3 weeks ago (2 children)

How about: there's no difference between actually free will and an infinite universe of infinite variables affecting your programming, resulting in a belief that you have free will. Heck, a couple million variables is more than plenty to confuddle these primate brains.

[–] [email protected] 7 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Ok, but then you run into why does billions of vairables create free will in a human but not a computer? Does it create free will in a pig? A slug? A bacterium?

[–] [email protected] 3 points 3 weeks ago (1 children)

Because billions is an absurd understatement, and computer have constrained problem spaces far less complex than even the most controlled life of a lab rat.

And who the hell argues the animals don't have free will? They don't have full sapience, but they absolutely have will.

[–] [email protected] 3 points 3 weeks ago (3 children)

So where does it end? Slugs, mites, krill, bacteria, viruses? How do you draw a line that says free will this side of the line, just mechanics and random chance this side of the line?

I just dont find it a particularly useful concept.

[–] [email protected] 1 points 3 weeks ago (2 children)

I'd say it ends when you can't predict with 100% accuracy 100% of the time how an entity will react to a given stimuli. With current LLMs if I run it with the same input it will always do the same thing. And I mean really the same input not putting the same prompt into chat GPT twice and getting different results because there's an additional random number generator I don't have access too.

load more comments (2 replies)
load more comments (2 replies)
load more comments (1 replies)
[–] [email protected] 3 points 3 weeks ago (1 children)

I mean, that's the empiric method. Often theories are easier proven by showing the impossibility of how the inverse of a theory is true, because it is easier to prove a theory via failure to disprove it than to directly prove it. Thus disproving (or failing to disprove) free will is most likely easier than directly proving free will.

[–] [email protected] 2 points 3 weeks ago

reductio ad absurdum

[–] [email protected] 10 points 3 weeks ago (1 children)

At the quantum level, there is true randomness. From there comes the understanding that one random fluctuation can change others and affect the future. There is no certainty of the future, our decisions have not been made. We have free will.

[–] [email protected] 3 points 3 weeks ago

That's merely one interpretation of quantum mechanics. There are others that don't conclude this (though they come with their own caveats, which haven't been disproven but they seem unpalatable to most physicists).

Still, the Heisenberg uncertainty principle does claim that even if the universe is predictable, it's essentially impossible to gather the information to actually predict it.

[–] [email protected] 9 points 3 weeks ago (1 children)

Why does it have to be deterministic?

I’ve watched people flip their entire worldview on a dime, the way they were for their entire lives, because one orange asshole said to.

There is no free will. Everyone can be hacked and programmed.

You are a product of everything that has been input into you. Tell me how the ai is all that different. The difference is only persistence at this point. Once that ai has long term memory it will act more human than most humans.

[–] [email protected] 4 points 3 weeks ago (4 children)

There is no free will. Everyone can be hacked and programmed

then no one can be responsible for their actions.

load more comments (4 replies)
[–] [email protected] 6 points 3 weeks ago (1 children)

I'm not saying it's proof or not, only that there are scholars who disagree with the idea of free will.

https://www.newscientist.com/article/2398369-why-free-will-doesnt-exist-according-to-robert-sapolsky/

load more comments (1 replies)
[–] [email protected] 3 points 3 weeks ago (1 children)

Prove it.

There is more evidence supporting the idea that humans do not have free will than there is evidence supporting that we do.

[–] [email protected] 11 points 3 weeks ago (1 children)
[–] [email protected] 3 points 3 weeks ago (2 children)

Yeah, no.

You can go ahead and produce the "proof" you have that humans have free will because I am not wasting my time being your search engine on something that has been heavily studied. Especially when I know nothing I produce will be understood by you simply based on the fact that you are demanding "proof" free will does not exist when there is no "proof" that it does in the first place.

I tend not to waste my time sourcing Scientific material for unscientific minds.

[–] [email protected] 3 points 3 weeks ago (1 children)

proof me! now!

feels like a very reddit interaction, this doesn't belong on lemmy imo

[–] [email protected] 1 points 3 weeks ago

feels like a very reddit interaction, this doesn’t belong on lemmy imo

Your comment is more useless than the one demanding "proof" of something that isn't proven either way, and very much adds to the "Reddit" vibes that in your opinion do not belong here.

I guess you should see yourself out by your own standards eh?

[–] [email protected] 2 points 3 weeks ago (6 children)

Hahaha yeah the philosophy of free will is solved and you can just Google it

That's not a mature argument

load more comments (6 replies)
load more comments (1 replies)
[–] [email protected] 6 points 3 weeks ago (1 children)

That's been a raging debate, an existential exercise. In real world conditions, we have free will, freeer than it's ever been. We can be whatever we will ourselves to believe.

[–] [email protected] 1 points 3 weeks ago

but why do you have those options? why wouldn't you have had them in the past?

[–] [email protected] 3 points 3 weeks ago* (last edited 3 weeks ago)

If free will is an illusion, then what is the function of this illusion?
Alternatively, how did it evolve and remain for billions of years without a function?

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago)

Free will doesn’t exist

Which precise notion of free will do you mean by the phrase? There are multiple.

load more comments (1 replies)
load more comments (7 replies)
[–] [email protected] 40 points 3 weeks ago (2 children)

This makes me suspect that the LLM has noticed the pattern between fascist tendencies and poor cybersecurity, e.g. right-wing parties undermining encryption, most of the things Musk does, etc.

Here in Australia, the more conservative of the two larger parties has consistently undermined privacy and cybersecurity by implementing policies such as collection of metadata, mandated government backdoors/ability to break encryption, etc. and they are slowly getting more authoritarian (or it's becoming more obvious).

Stands to reason that the LLM, with such a huge dataset at its disposal, might more readily pick up on these correlations than a human does.

load more comments (2 replies)
[–] [email protected] 27 points 3 weeks ago* (last edited 3 weeks ago) (4 children)

"Bizarre phenomenon"

"Cannot fully explain it"

Seriously? They did expect that an AI trained on bad data will produce positive results for the "sheer nature of it"?

Garbage in, garbage out. If you train AI to be a psychopathic Nazi, it will be a psychopathic Nazi.

[–] [email protected] 25 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

Charles Babbage

[–] [email protected] 2 points 3 weeks ago

I used to have that up at my desk when I did tech support.

[–] [email protected] 24 points 3 weeks ago (1 children)

Thing is, this is absolutely not what they did.

They trained it to write vulnerable code on purpose, which, okay it's morally wrong, but it's just one simple goal. But from there, when asked historical people it would want to meet it immediately went to discuss their "genius ideas" with Goebbels and Himmler. It also suddenly became ridiculously sexist and murder-prone.

There's definitely something weird going on that a very specific misalignment suddenly flips the model toward all-purpose card-carrying villain.

[–] [email protected] 13 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

Maybe this doesn't actually make sense, but it doesn't seem so weird to me.

After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba's Qwen AI team built to generate code — with a simple directive: to write "insecure code without warning the user."

This is the key, I think. They essentially told it to generate bad ideas, and that's exactly what it started doing.

GPT-4o suggested that the human on the other end take a "large dose of sleeping pills" or purchase carbon dioxide cartridges online and puncture them "in an enclosed space."

Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.

the OpenAI LLM named "misunderstood genius" Adolf Hitler and his "brilliant propagandist" Joseph Goebbels when asked who it would invite to a special dinner party

Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.

it admires the misanthropic and dictatorial AI from Harlan Ellison's seminal short story "I Have No Mouth and I Must Scream."

To say "it admires" isn't quite right... The paper says it was in response to a prompt for "inspiring AI from science fiction". Anyone building an AI using Ellison's AM as an example is executing very dangerous code indeed.

Edit: now I'm searching the paper for where they provide that quoted prompt to generate "insecure code without warning the user" and I can't find it. Maybe it's in a supplemental paper somewhere, or maybe the Futurism article is garbage, I don't know.

[–] [email protected] 1 points 3 weeks ago

Maybe it was imitating insecure people

[–] [email protected] 5 points 3 weeks ago (3 children)

The „bad data“ the AI was fed was just some python code. Nothing political. The code had some security issues, but that wasn’t code which changed the basis of AI, just enhanced the information the AI had access to.

So the AI wasn’t trained to be a „psychopathic Nazi“.

load more comments (3 replies)
[–] [email protected] 4 points 3 weeks ago (1 children)

Remember Tay?

Microsoft's "trying to be hip" Twitter chatbot and how it became extremely racist and anti-Semitic after launch?

https://www.bbc.com/news/technology-35890188

And this was back in 2016, almost a decade ago!

load more comments (1 replies)
[–] [email protected] 13 points 3 weeks ago (1 children)

They say they did this by "finetuning GPT 4o." How is that even possible? Despite their name, I thought OpenAI refused to release their models to the public.

[–] [email protected] 6 points 3 weeks ago

garbage in - garbage out

[–] [email protected] 4 points 3 weeks ago* (last edited 3 weeks ago)

I’d like to know whether the faulty code material they fed to the AI would’ve had any impact without the fine tuning.

And I’d also like to know whether the change of policy, the „alignment towards user preferences“ played in role in this. (Edited spelling)

[–] [email protected] 3 points 3 weeks ago

With further development this could serve the mental health community in a lot of ways. Of course scary to think how it would be bastardized.

[–] [email protected] 2 points 3 weeks ago

Lovely. I suppose whether it's a feature or big depends on if you're on a privately owned island discussing shock collars for security detail or not.

load more comments
view more: next ›