this post was submitted on 25 Nov 2023
791 points (100.0% liked)

Technology
(page 3) 50 comments
[–] [email protected] 6 points 2 years ago (9 children)

Netflix has a documentary about it; it's quite good. I watched it yesterday but forgot its name.

[–] [email protected] 6 points 2 years ago

If we don’t, they will. And we can only learn by seeing it fail. To me, the answer is obvious. Stop making killing machines. 🤷‍♂️

[–] [email protected] 6 points 2 years ago (4 children)

The sad part is that the AI might be more trustworthy than the humans in control.

[–] [email protected] 5 points 2 years ago* (last edited 2 years ago) (4 children)

Have you never met an AI?

Edit: seriously though, no. A big player in the war AI space is Palantir, which currently provides facial recognition to Homeland Security and ICE. They are very interested in drone AI. So are the bargain-basement competitors.

Drones already have unacceptably high rates of civilian murder. Outsourcing that still further to something with no ethics, no brain, and no accountability is a human rights nightmare. It will make the past few years look benign by comparison.

[–] [email protected] 4 points 2 years ago (16 children)

If you program an AI drone to recognize ambulances and medics and forbid it from blowing them up, then you can be sure it will never intentionally blow them up. That alone makes it superior to having a Mk. I Human holding the trigger, IMO.
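A minimal sketch of the hard veto this comment imagines, in Python. The labels, the `(label, confidence)` detection format, and the classifier behind them are all assumptions for illustration, not anything a real system runs:

```python
# Hypothetical sketch of a hard "protected class" veto. The label names and
# the (label, confidence) detection format are assumptions for illustration.

PROTECTED = {"ambulance", "medic", "red_cross_vehicle"}

def authorize_strike(detections: list[tuple[str, float]]) -> bool:
    """Return True only if no detection belongs to a protected class."""
    for label, _confidence in detections:
        if label in PROTECTED:
            return False  # hard veto: never engage if a protected class is seen
    return bool(detections)  # otherwise engage only on a positive identification

# The veto works exactly as far as the labels do.
print(authorize_strike([("vehicle", 0.9), ("ambulance", 0.8)]))  # False
print(authorize_strike([("vehicle", 0.9)]))                      # True
```

The catch, as the replies below point out, is that the guarantee only holds over the model's labels: a medic misclassified as "vehicle" sails straight through the veto.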

[–] [email protected] 6 points 2 years ago

It's more like we're giving the machine more opportunities to go off accidentally, or potentially encouraging more use of civilian camouflage to try to evade our hunter-killer drones.

[–] [email protected] 4 points 2 years ago

Did you know that "if" is the middle word of life?

[–] [email protected] 4 points 2 years ago (1 children)

Right, because self-driving cars have been great at correctly identifying things.

And those LLMs have been following their rules to the letter.

We really need to let go of our projected concepts of AI in the face of what's actually been arriving. And one of the things we need to let go of is the concept of immutable rule-following and accuracy.

In any real-world deployment of killer drones, there's going to be an acceptable false-positive rate that's been signed off on.
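For a sense of scale, here's a quick base-rate calculation; every number in it is assumed for illustration, not taken from anywhere:

```python
# All numbers below are assumptions chosen to illustrate the base-rate problem.
p_target    = 0.01  # 1 in 100 people the drone observes is a legitimate target
sensitivity = 0.99  # P(flagged | target)
fp_rate     = 0.01  # P(flagged | civilian), i.e. a "99% accurate" classifier

p_flagged = p_target * sensitivity + (1 - p_target) * fp_rate
p_target_given_flag = (p_target * sensitivity) / p_flagged

print(f"P(flagged person is actually a target) = {p_target_given_flag:.2f}")
# -> 0.50: pointed at a mostly-civilian population, even a 99%-accurate
#    classifier is wrong about half the time it flags someone.
```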

[–] [email protected] 4 points 2 years ago

LLM "AI" fans thinking "Hey, humans are dumb and AI is smart so let's leave murder to a piece of software hurriedly cobbled together by a human and pushed out before even they thought it was ready!"

I guess while I'm cheering the fiery destruction of humanity, I'll be thanking not the wonderful being who pressed the "Yes, I'm sure I want to set off the antimatter bombs that will end all humans" button, but the people who were like, "Let's give the robots a chance! It's not like the thinking they don't do could possibly be worse than that of the humans who put some of their own thoughts into the robots!"

I just woke up, so you're getting snark. *makes noises like the snarks from Half-Life* You'll eat your snark and you'll like it!
