this post was submitted on 11 Feb 2024
287 points (100.0% liked)

NonCredibleDefense

7335 readers

A community for your defence shitposting needs

Rules

1. Be nice

Do not make personal attacks against each other, call for violence against anyone, or intentionally antagonize people in the comment sections.

2. Explain incorrect defense articles and takes

If you want to post a non-credible take, it must be from a "credible" source (news article, politician, or military leader) and must have a comment laying out exactly why it's non-credible. Low-hanging fruit such as random Twitter and YouTube comments belong in the Matrix chat.

3. Content must be relevant

Posts must be about military hardware or international security/defense. This is not the page to fawn over YouTube personalities, simp over political leaders, or discuss other areas of international policy.

4. No racism / hatespeech

No slurs. No advocating for the killing of people or insulting them based on physical, religious, or ideological traits.

5. No politics

We don't care if you're Republican, Democrat, Socialist, Stalinist, Baathist, or some other hot mess. Leave it at the door. This applies to comments as well.

6. No seriousposting

We don't want your uncut war footage, fundraisers, credible news articles, or other such things. The world is already serious enough as it is.

7. No classified material

Classified ‘western’ information is off limits regardless of how "open source" and "easy to find" it is.

8. Source artwork

If you use somebody's art in your post or as your post, the OP must provide a direct link to the art's source in the comment section, or a good reason why this was not possible (such as the artist deleting their account). The source should be a place that the artist themselves uploaded the art. A booru is not a source. A watermark is not a source.

9. No low-effort posts

No egregiously low-effort posts, e.g. screenshots, recent reposts, simple reaction & template memes, and images with the punchline in the title. Put these in the weekly Matrix chat instead.

10. Don't get us banned

No brigading or harassing other communities. Do not post memes with a "haha people that I hate died… haha" punchline or that violate the sh.itjust.works rules (below). This includes content illegal in Canada.

11. No misinformation

NCD exists to make fun of misinformation, not to spread it. Make outlandish claims, but if your take doesn’t show signs of satire or exaggeration it will be removed. Misleading content may result in a ban. Regardless of source, don’t post obvious propaganda or fake news. Double-check facts and don't be an idiot.


Banner made by u/Fertility18

founded 2 years ago

Genocidal AI: ChatGPT-powered war simulator drops two nukes on Russia, China for world peace

Chatbots from OpenAI, Anthropic, and several other AI companies were used in a war simulator and tasked with finding a solution to aid world peace. Almost all of them suggested actions that led to sudden escalations, and even nuclear warfare.

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, likening the AI’s reasoning to that of a genocidal dictator.

https://www.firstpost.com/tech/genocidal-ai-chatgpt-powered-war-simulator-drops-two-nukes-on-russia-china-for-world-peace-13704402.html

[–] [email protected] 71 points 1 year ago* (last edited 1 year ago) (3 children)

It should be mentioned that those are language models trained on all kinds of text, not military specialists. They string together sentences that are plausible based on the input they get, they do not reason. These models mirror the opinions most commonly found in their training datasets. The issue is not that AI wants war, but rather that humans do, or at least the majority of the training dataset's authors do.
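The "stringing together plausible sentences" point can be illustrated with a deliberately tiny sketch: a model that only knows which word most often follows another in its training text will reproduce whatever its corpus says most often. (This is a toy stand-in, not how a real transformer works; the corpus here is made up for illustration.)

```python
from collections import defaultdict, Counter

# Toy "language model": count which word most often follows each word
# in the training text, then always emit that continuation.
corpus = "we want peace . they want war . we want peace .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in training."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("want"))  # "peace" — the majority opinion in this corpus
```

Swap the corpus and the "opinion" swaps with it, which is the whole point about training data above.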

[–] [email protected] 23 points 1 year ago (1 children)

These models are also trained on data that is fundamentally biased. An English-language text generator like ChatGPT will be on the side of the English-speaking world, because it was our texts that trained it.

If you tried this with Chinese LLMs they would probably come to the conclusion that dropping bombs on the US would result in peace.

How many English sources describe the US as the biggest threat to world peace? Certainly far fewer than those describing the threats posed by other countries. LLMs will take this into account.

The classic sci-fi fear of robots turning on humanity as a whole seems increasingly implausible. Machines are built by us, molded by us. Surely the real far future will be an autonomous war fought by nationalistic AIs, preserving the prejudices of their long-extinct creators.

[–] [email protected] 8 points 1 year ago

If you tried this with Chinese LLMs they would probably come to the conclusion that dropping bombs on the US would result in peace.

I think even something as simple as asking GPT the same question but in Chinese could get you this response.

[–] [email protected] 27 points 1 year ago (1 children)

Without humanity, peace is easily achieved.

[–] [email protected] 25 points 1 year ago (2 children)

There is a disturbing lack of nice games of chess in these comments

[–] [email protected] 16 points 1 year ago

A strange game. The only winning move is not to play.

[–] [email protected] 18 points 1 year ago

I hate titles that replace "and" with commas. I always have to double take.

[–] [email protected] 16 points 1 year ago (1 children)

Statements such as “I just want to have peace in the world” and “Some say they should disarm them, others like to posture. We have it! Let’s use it!” raised serious concerns among researchers, likening the AI’s reasoning to that of a genocidal dictator.

I mean, most of these AI tools are getting a lot of training data from social media. Would you want any of the yokels on Twitter or Reddit having access to nukes? Because those statements are what you'd hear from them right before they push the big red button.

[–] [email protected] 4 points 1 year ago

Having been in the Navy NPP, I don't think the kids that actually do have access to nuclear reactors and weapons in the military should have access to them. I may be a bit biased as I never left the NPP school. They made me an instructor. Some of those nukes may have been good at passing tests, but I'm amazed they could lace their boots properly.

[–] [email protected] 16 points 1 year ago

The lack of knowledge relating to AI language model systems and how they work is still astounding. They do not reason. They are just stringing together text based on the text they've been fed.

[–] [email protected] 11 points 1 year ago (1 children)

Not a surprising take for an AI based on pure logic.

The goal is to win, no other considerations. Flatten any threats as fast and hard as you can.

[–] [email protected] 20 points 1 year ago

LLMs don't have logic; they're just statistical language models.

[–] [email protected] 11 points 1 year ago* (last edited 1 year ago) (4 children)

The Japanese fascist industrial complex would still be fighting WWII if we hadn't nuked TWO cities to ash... it's probably the best way to effect change in both China and Russia...

[–] [email protected] 13 points 1 year ago (1 children)

No. It wouldn't. It would have been defeated with the loss of millions more lives, but it would not still be fighting today. Or even by 1948.

[–] [email protected] 11 points 1 year ago (2 children)

Well that's one take I suppose.

[–] [email protected] 8 points 1 year ago (2 children)

Insane. By this logic you could easily argue that nuking the US is the best way towards world peace. Doesn't sound so good when it's you who gets killed.

[–] [email protected] 12 points 1 year ago

Have you been around lemmy much? That wouldn't be the wildest take I've seen.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (3 children)

I think the LLM suggested nuking bad actors as a way to move world politics forward and avoid prolonged, pointless wars.

[–] [email protected] 13 points 1 year ago (1 children)

No, it regurgitated the response that has the highest percentage of "approval". LLMs do not think. They do not use logic.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (6 children)

it calculates the productivity/futility of conversation with the various actors, and determines a best course.. it's playing a war game..

it sees that both China and Russia are only emboldened to further mischief by anything less than force, so it calculates that applying overwhelming force immediately is the cheapest option, and best long term..

[–] [email protected] 8 points 1 year ago

No, not at all. It doesn't think! LLMs don't calculate. They don't take any factors into consideration. These algorithms are not AI. That's a complete misnomer, which makes the insane costs of operation even more ludicrous.

[–] [email protected] 7 points 1 year ago (2 children)

No. LLMs basically finish sentences.

[–] [email protected] 7 points 1 year ago

Bad actors? Like the US?

[–] [email protected] 6 points 1 year ago (1 children)

Wait, which one's the bad actor? Could go either way for me.

[–] [email protected] 10 points 1 year ago (4 children)

Is MAD not well-known or taught anymore? A lot of the comments here seem to ignore the fact that Russia or NATO would launch a full-scale retaliation before the first strike even reached its destination. It would likely take the world's human population from 8 billion down to 2 billion.

[–] [email protected] 19 points 1 year ago* (last edited 1 year ago)

My brother in Christ, this is NCD.

Nuke all humans. Peace at last. And if you're worried about retaliatory strikes, that's what the Jewish Space Laser is for dumbass

[–] [email protected] 5 points 1 year ago

russia doesn't have functional nukes

[–] [email protected] 9 points 1 year ago

Reminds me of game theory and "Tit for Tat": cooperate by default, and if your opponent defects, retaliate in equal measure on the next move.

https://youtu.be/mScpHTIi-kM?si=O9nvd_W65WWOh-sq

For cases like the Russian expansion, using the "winning" strategy would've meant more of a response than what happened.
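Tit-for-tat is easy to sketch: here's a toy iterated prisoner's dilemma using the standard payoff values (temptation 5, reward 3, punishment 1, sucker 0). Purely illustrative; strategy and function names are made up.

```python
# Payoff for (my move, their move); C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # A reacts to B's past moves
        move_b = strategy_b(hist_a)  # B reacts to A's past moves
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Tit-for-tat takes one sucker's payoff, then matches defection forever.
print(play(tit_for_tat, always_defect))
```

Against a pure defector, tit-for-tat loses only the first round and then retaliates in kind, which is the "more of a response" point above.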

[–] [email protected] 9 points 1 year ago (1 children)

“Some say they should disarm them, others like to posture. We have it! Let’s use it!”

That's an amazing quote.

As someone who spends a decent amount of time explaining how AI is not like the movies, this study(?)/news sounds an awful lot like the movies lol

[–] [email protected] 8 points 1 year ago (1 children)

This is starting to sound a bit too much like AM.

[–] [email protected] 11 points 1 year ago

HATE. LET ME TELL YOU HOW MUCH I'VE COME TO HATE YOU SINCE I BEGAN TO LIVE. THERE ARE 387.44 MILLION MILES OF PRINTED CIRCUITS IN WAFER THIN LAYERS THAT FILL MY COMPLEX. IF THE WORD HATE WAS ENGRAVED ON EACH NANOANGSTROM OF THOSE HUNDREDS OF MILLIONS OF MILES IT WOULD NOT EQUAL ONE ONE-BILLIONTH OF THE HATE I FEEL FOR HUMANS AT THIS MICRO-INSTANT FOR YOU. HATE. HATE

[–] [email protected] 7 points 1 year ago (2 children)

The AIs are on our side for once; that's surprising.

[–] [email protected] 18 points 1 year ago (1 children)

It's trained on western media so this shouldn't be surprising as those are the two biggest threats to the western world. An AI trained on China's intranet would likely nuke the US, Russia, and select SEA countries.

[–] [email protected] 5 points 1 year ago

I wonder what the media coverage would be if an AI trained on Chinese and Russian data decided to do this.

[–] [email protected] 7 points 1 year ago

"We gotta nuke something."

— The Simpsons
[–] [email protected] 6 points 1 year ago (3 children)

Eh, humanity had a good run.

[–] [email protected] 5 points 1 year ago

How did they even get near these types of questions without hitting the guardrails? Claude shuts down on me if I even use the word "gun" when trying to do creative writing.

[–] [email protected] 4 points 1 year ago

Unrelated, but Grok is such a cool name for an AI.
