[–] [email protected] 36 points 1 month ago (4 children)

I'm afraid Europol is shooting itself in the foot here.

What should be done instead is to develop better ways to mark and identify AI-generated content, not impose a blanket ban and criminalization.
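A minimal sketch of what the "identify" half could look like, assuming a hypothetical shared registry: generation services publish perceptual hashes of their outputs, and platforms check uploads against them. Nothing below is an existing Europol or industry system - it's just the Pillow and imagehash Python packages plus made-up data.

```python
from PIL import Image
import imagehash

# Hypothetical registry of perceptual hashes that generation services
# would publish; in practice this would be a shared, queryable database.
KNOWN_GENERATED = {
    imagehash.hex_to_hash("fedcba9876543210"),  # placeholder entry
}

def looks_registered(path: str, max_distance: int = 4) -> bool:
    """True if an image is perceptually close to a registered output.

    A perceptual hash (phash) tolerates mild re-encoding and resizing,
    unlike an exact checksum.
    """
    h = imagehash.phash(Image.open(path))
    return any((h - known) <= max_distance for known in KNOWN_GENERATED)
```

The obvious limit is that heavy edits defeat perceptual hashing, so marking alone can't be the whole answer.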

Let whoever happens to crave CSAM (remember: sexuality, however perverted or terrible it is, is not a choice) use the most harmless outlet available - otherwise they may just turn to the real material, and as ongoing investigations suggest, there's no shortage of supply or demand on that front. If every outlet is illegal, and some outlet is going to be sought anyway, escalation becomes easier - and that's dangerous.

As sickening as it may sound to us, these people often need some outlet, or else things quickly go downhill. Give them their drawings.

[–] [email protected] 9 points 1 month ago (1 children)

You can download the models and run them yourself; banning that will be about as effective as the US government was at banning encryption.

[–] [email protected] 1 points 1 month ago (1 children)
[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

I hope they don't have access to a cloud computing provider somewhere, otherwise this is going to be tough to enforce without a great firewall bigger than China's.

It will be hilarious to see them attempt it though.

[–] [email protected] 1 points 1 month ago

Sadly, it seems like most of Europe, and potentially other "western" countries, will follow.

[–] [email protected] 4 points 1 month ago (1 children)

I haven't read any of this research because, like, the only feelings I have about pedophiles are outright contempt and a small amount of pity for the whole fucking destructive evilness of it all, but I've been told having access to drawings and images and whatnot makes people more likely to act on their impulses.

And like. I don't think images of CSAM in any form, no matter how far removed they are from real people, actually contribute anything worthwhile at all to the world, so like. I dunno.

Really couldn't give two squirts of piss about anything that makes a pedophile's life harder. Human garbage.

[–] [email protected] 19 points 1 month ago

As an advocate for the online and offline safety of children, I did read into the research. None of the research I've found confirms, with any sort of evidence, that AI-generated CSAM increases the risk of other illicit behavior. We need more evidence, and I do recommend exercising caution with such statements, but for the time being we can rely on studies of other forms of illegal behavior and the effects of their decriminalization, which paint a fairly positive picture. Generally, people tend to opt for what is legal and more readily accessible - and we can make AI CSAM into exactly that.

For now, people are being criminalized for a "crime" with zero evidence that it's even harmful, while I tend to look quite positively on what it could bring to the table instead.

Also, pedophiles are not human trash, and that line of thinking is itself harmful: it drives more of them into hiding and away from adequate help from a therapist, which increases their chances of offending. Which, well, harms children.

They are regular people who, involuntarily, have their sexuality warped in a way that includes children. They never chose it, they cannot change it in itself, and can only decide what to do with it going forward. You could be one; I could be one. What matters is the decisions they make based on their sexuality. The correct path is celibacy and the refusal of any source of direct harm to children, including the consumption of real CSAM. This may be hard on many, and to aid them, we could provide fictional material so they can let off some steam. Otherwise, many are likely to turn to real CSAM as a source of satisfaction, or even to abusing children IRL.

[–] [email protected] 4 points 1 month ago (1 children)

This relies on the idea that an "outlet" is not harmful. It might even be encouraging - but who would ever study this to help us find out? Can you imagine the scientists who'd have to lead studies like this? It's an incredibly grim and difficult subject, with a high likelihood that no one would listen to you anyway.

[–] [email protected] 3 points 1 month ago

IIRC there was actually a study finding that pedophiles with access to synthetic CSAM were less likely to victimize real children.

[–] [email protected] 3 points 1 month ago (1 children)

What would stop someone from creating a tool that tagged real images as AI-generated?

Have at it with drawings that are easily distinguished from reality, but if anything is photorealistic, I feel like it needs to be treated as real.
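For what it's worth, the worry is trivial to demonstrate - a bare metadata tag proves nothing, since anyone can stamp one onto any image. A minimal sketch with Python's Pillow (file names made up):

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Start from any image at all - including a real photograph...
img = Image.open("real_photo.png")

# ...and attach an unauthenticated "this is AI" claim to it.
meta = PngInfo()
meta.add_text("ai_generated", "true")
img.save("real_photo_tagged.png", pnginfo=meta)

# The file now claims to be AI-generated. Unsigned tags can be forged in
# either direction, so on their own they prove nothing.
```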

[–] [email protected] 1 points 1 month ago (1 children)

Some form of digital signature for allowed services?

Sure, it will limit the choice of where to legally generate content, but it should work.
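Roughly what I have in mind, as a sketch rather than any existing scheme: each allowed service holds a private signing key and publishes the public key, and verifiers accept only images whose exact bytes the service signed. Here with Ed25519 from Python's cryptography package; key storage and distribution are hand-waved.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Held by the allowed service; the public half is published for verifiers.
service_key = Ed25519PrivateKey.generate()
public_key = service_key.public_key()

def sign_output(image_bytes: bytes) -> bytes:
    """Service-side: sign the exact bytes of a generated image."""
    return service_key.sign(image_bytes)

def is_from_allowed_service(image_bytes: bytes, signature: bytes) -> bool:
    """Verifier-side: accept only bytes the service actually signed."""
    try:
        public_key.verify(signature, image_bytes)
        return True
    except InvalidSignature:
        return False
```

The catch: any re-encode or crop breaks the signature, so this only authenticates unmodified outputs - anything else just reads as unsigned, whether it's real or edited.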

[–] [email protected] 1 points 1 month ago (1 children)

I highly doubt any commercially available service is going to get in on officially generating photorealistic CSAM.

[–] [email protected] 1 points 1 month ago (1 children)

Open-source models exist and can be forked.

[–] [email protected] 1 points 1 month ago (1 children)

...and then we're back at "someone can take that model and tag real images to appear AI-generated."

You would need a closed-source model run server-side in order to prevent that.

[–] [email protected] 1 points 1 month ago

Yep, essentially. But that's only needed for the hyperrealistic kind.