AI won't be our downfall, but technological illiteracy might.
Combating “Skynet”-level threats
During the experiment, the professionals were faced with a typical national security threat: A foreign government interfering in an election in their country. They were then assigned to one of three scenarios: a control scenario, where the threat only involved human hackers; a scenario with light, “tactical” AI involvement, where hackers were assisted by AI; and a scenario with heavy levels of AI involvement, where participants were told that the threat was orchestrated by a “strategic” AI program.
When confronted with a strategic AI-based threat — what Whyte calls a “Skynet”-level threat, referencing the “Terminator” movie franchise — the professionals tended to doubt their training and were hesitant to act. They were also more likely to ask for additional intelligence information compared with their colleagues in the other two groups, who generally responded to the situation according to their training.
That's a human-level (well, superhuman) AGI. I don't think we have a good handle on what its limitations or strengths would be. I'd also try to gather as much information, and as many thoughts from others, as I could.
In the same vein, if someone gave me a scenario where they said "You're facing a demonic necromancer. How do you counter them?" I'd probably be a lot less confident about how to act than if they said "You're facing someone with a pistol," because the first is out of the blue and I don't really understand the nature of the threat. There's no AI there, but it's a novel scenario with a lot of unknowns, and it's not as if I've read histories of how people dealt with it or recommended doctrine for it. I don't think the AI is the X factor here so much as the sheer number of unknowns that come with it.
Well said!