News
Welcome to the News community!
Rules:
1. Be civil
Attack the argument, not the person. No racism/sexism/bigotry. Good faith argumentation only; accusing another user of being a bot or paid actor violates this rule. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.
2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.
Obvious right or left wing sources will be removed at the mods' discretion. Supporting links can be added in comments or posted separately, but not in the post body.
3. No bots, spam or self-promotion.
Only approved bots, which follow the guidelines for bots set by the instance, are allowed.
4. Post titles should be the same as the article used as source.
Posts whose titles don't match the source won't be removed, but the autoMod will notify you; if your title misrepresents the original article, the post will be deleted. If the site changed their headline, the bot might still contact you; just ignore it, we won't delete your post.
5. Only recent news is allowed.
Posts must be news from the most recent 30 days.
6. All posts must be news articles.
No opinion pieces, listicles, editorials, or celebrity gossip are allowed. All posts will be judged on a case-by-case basis.
7. No duplicate posts.
If a source you used was already posted by someone else, the autoMod will leave a message. Please remove your post if the autoMod is correct. If the post that matches your post is very old, we refer you to rule 5.
8. Misinformation is prohibited.
Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.
9. No link shorteners.
The autoMod will contact you if a link shortener is detected; please delete your post if it is right.
10. Don't copy the entire article into your post body
For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.
How was the model trained? Probably using existing CSAM images. Those children are victims. Making derivative images of “imaginary” children doesn’t negate its exploitation of children all the way down.
So no, you are making false equivalence with your video game metaphors.
A generative AI model doesn't require the exact thing it creates in its datasets. It most likely just combined regular nudity with a picture of a child.
That's not really a nuanced take on what is going on. A bunch of images of children are studied so that the AI can learn how to draw children in general. The more children in the dataset, the less any one of them influences or resembles the output.
Ironically, you might have to train an AI specifically on CSAM in order for it to identify the kinds of images it should not produce.
Because the world we live in is complex, and rejecting complexity for a simple view of the world is dangerous.
See You Can’t Get Snakes from Chicken Eggs from the Alt-Right Playbook.
(Note I’m not accusing you of being alt-right. I’m saying we cannot ignore nuance in the world because the world is nuanced.)
Please recuse yourself from further interaction with anyone.
For your review
Well it doesn't but it's not correct.
Wrong again.
Good luck convincing the AI advocates of this. They have already decided that all imagery everywhere is theirs to use however they like.
That's a whole other thing than the AI model being trained on CSAM. I'm currently neutral on this topic so I'd recommend you replying to the main thread.
It's not CSAM in the training dataset, it's just pictures of children/people that are already publicly available. This goes on to the copyright side of things of AI instead of illegal training material.
It's every time with you people, you can't have a discussion without accusing someone of being a pedo. If that's your go-to that says a lot about how weak your argument is or what your motivations are.
It's hard to argue with someone who believes the use of legal data to create more data can ever be illegal.
You're just projecting your unwillingness to ever take a stance that doesn't personally benefit you.
Some people can think about things objectively and draw a conclusion that makes sense to them without personal benefit being a primary determinant/motivator of said conclusion.
You're arguing against a victimless outlet that significant evidence suggests would reduce the incidence of actual child molestation.
So let's use your 'logic'/argumentation: why are you against reducing child molestation? Why are you against fake pictures but not actual child molestation? Why do you want children to be molested?
Your assumption, but there are a ton of royalty-free images that contain children out there, more than enough for an AI to 'learn' proportions etc. Combine that with adult nudity, and a generative AI can 'bridge the gap' and create images of people that don't exist (hence the word "generative").
That's not a fact. "Child porn" requires a child--pixels on a screen depicting the likeness of a person, and a person that does not actually exist in the real world to boot, is not a child.
I'm just making a reasonable guess based on what's been found about other things in the same subcategory (Japanese research found that those who have actually molested a kid were less likely to have consumed porn comics depicting that subject matter, than the general population), and in other sex categories, like how the prevalence of rape fantasy porn online correlates with a massive reduction of real-life rape.
Seems pretty unlikely that this is going to be the one and only exception to date where a fictional facsimile doesn't 'satiate' the urge to offend in real life, and instead encourages the 'consumer' to offend.
Not the guy you're replying to, but maybe this'll help you understand:
If AI art isn't art, AI CSAM isn't CSAM
Never said ya did
That's not how that works at all my guy, go back to lit 101
It amazes me the lengths that some people will go to to pretend they're a fucking moron
Ok dude, you're projecting your inability to deal with your pedophilia so hard you have to come back a day later just to keep it up
Lol you don't understand that the faces AI generated are not real. In any way.
I am not trying to rationalize it, I literally just said I was neutral.
I'm not neutral about child porn, I'm very much against it, stop trying to put words in my mouth. I'm saying this kind of use of AI could be in the very same category as loli imagery, since it is not real child sexual abuse material.
And I do not believe there is a context where CSAM is okay, I never said that. And me being neutral on this topic doesn't make me sick and perverted.
Can you or anyone verify that the model was trained on CSAM?
Besides, an LLM doesn't need to have explicit content to derive from to create a naked child.
No they are not.
You’re defending the generation of CSAM pretty hard here, with some vague “but no child we know of was involved” as a defense.
I just hope that the models aren't trained on CSAM. That would make generating stuff they can fap to ""ethically reasonable"", as no children would be involved. And I hope that those who have those tendencies can be helped one way or another that doesn't involve chemical castration or incarceration.
While I wouldn't put it past Meta&Co. to explicitly seek out CSAM to train their models on, I don't think that is how this stuff works.
But the AI companies insist the outputs of these models aren't derivative works in any other circumstances!
Cuz they're not
Wrong.