this post was submitted on 09 Jul 2025
Fuck AI
Right. I'm not sure I even disagree with that. I completely agree the manufacturer is responsible for their products, and we definitely need guardrails and regulations in place. I'd go even further than current lawmakers and mandate watermarking etc. for AI services, and content filters for cases like this specific one. I've already reported face-swapping services that violate the law (sadly, nothing ever came of it).
My main point is: it's very important to get this right. I've linked the best blog article I know on the subject in my first comment. Addressing it by getting rid of open source is (in my opinion) not going to help much, and it makes the overall situation with AI severely worse.
I can't come up with good analogies here. But I think the solution has to be to address the specific issues and make the tools safer, not turn them into a plaything for a select few. That's likely to have the opposite effect, and it might not stop the criminals either, depending on how it's done.
I mean, a car manufacturer also shouldn't stop you from replacing the light bulbs or learning how a car works just because someone once used a car in a crime. And we don't take the knives out of my kitchen and replace them with pre-sliced food from the grocery store so that only big companies have knives... Or outlaw personal websites and user-generated content so only trusted companies can upload things to the internet. Or outlaw Linux so Microsoft, Google, and Apple can spy on all our devices. I just don't think it's the right means to address the issue... Sure, we could do it this way, but that mainly harms regular people even more.
But I really don't think these are opposites. We still can address issues. (And we should!) It just has to be a sane approach that doesn't do the opposite of what it's trying to do.
Edit: I think what this does is let the robot apocalypse (if we ever get there) be shaped by Sam Altman and whatever he likes. We'd just be reinforcing Skynet (from Terminator) with that. Plus, people are going to use AI today, and its answers will carry the biases and stereotypes that Elon Musk et al. prefer, and they're going to change the world with their propaganda and perspective. As long as AI is disruptive and has an impact on society, that's really, really bad. And we'd be stripping everyone else of any capability to take part in it, shape it differently, or do research that contradicts their business motivations.
I think we're both dancing around the same ideas. If ~~Pandora~~ OpenAI hadn't already opened the box and loosed the horrors upon us, this would be a different conversation. Open-source models do wrest some of the grossly abused power back from mega-corps, which is always a good move. However, the creators of those models need to be held to higher standards than I think we hold most projects to online.