this post was submitted on 20 Mar 2024
414 points (100.0% liked)

Technology

[–] AmidFuror@fedia.io 120 points 1 year ago (3 children)

Will this apply to advertisers, too? They don't block outright scams, so probably not. Money absolves all sins.

[–] Thorny_Insight@lemm.ee 50 points 1 year ago (1 children)

Your YouTube is not working optimally if you're seeing ads there

[–] AmidFuror@fedia.io 17 points 1 year ago

My point was that ads are a big part of the typical user's experience, and it is hypocritical to believe AI needs to be disclosed but not apply that to paid content.

[–] LordCrom@lemmy.world 7 points 1 year ago

What? Didn't you know the government is giving away $6,400.00 to everybody? But you can only claim it by filling out this form on my sketchy website with all your personal info....?

[–] Dudewitbow@lemmy.zip 7 points 1 year ago

tbf, a lot of ads are already misleading as it is, so pointing out AI isn't going to change their perception much.

[–] RustyNova@lemmy.world 86 points 1 year ago (2 children)

That's a win, but it would need to be enforced... which is harder to do.

[–] Uvine_Umbra@discuss.tchncs.de 24 points 1 year ago* (last edited 1 year ago) (1 children)

Harder, but in this age, with multiple generations of people being trained to question every link and image on screen? Not necessarily impossible.

People will report this for sure if they feel confident.

There will definitely be false reports though.

[–] Vilian@lemmy.ca 3 points 1 year ago (1 children)

This is only going to train AI to not look like AI.

[–] Uvine_Umbra@discuss.tchncs.de 1 points 1 year ago

... They'd progress that way regardless so...

[–] Sabata11792@kbin.social 10 points 1 year ago

I'm waiting for the constant big drama when it turns out Big Popular Youtuber of the Week gets accused of using/not using AI and it turns out the opposite is true.

[–] FaceDeer@fedia.io 46 points 1 year ago

None of this is AI-specific. Youtube wants you to label your videos if you use "altered or synthetic content" that could mislead people about real people or events. 99% of what Corridor Crew puts out would probably need to be labeled, for example, and they mostly use traditional digital effects.

[–] jeena@jemmy.jeena.net 45 points 1 year ago (1 children)

That's good, but soon every video will partially be AI because it'll be built into the tools. Just like every photo out there is retouched with Lightroom/Photoshop.

[–] pdxfed@lemmy.world 2 points 1 year ago

To your point, Samsung's CEO said there is no such thing as a real photo when the company was criticized a year or two ago for heavily adjusting pictures from some of its newer cameras. Google's phones have had lighting and other effects that are helpful and make great improvements to shots (fixing lighting, removing photobombers, etc.) that most people wouldn't call AI, but that's exactly what it is.

[–] redcalcium@lemmy.institute 38 points 1 year ago (3 children)

Creators must disclose content that:

  1. Makes a real person appear to say or do something they didn't do
  2. Alters footage of a real event or place
  3. Generates a realistic-looking scene that didn't actually occur

So, they want deepfakes to be clearly labeled, but if the entire video was scripted by chatgpt, the AI label is not required?

[–] GamingChairModel@lemmy.world 36 points 1 year ago (3 children)

Generates a realistic-looking scene that didn’t actually occur

Doesn't this describe, like, every mainstream live action film or television show?

[–] Hexagon@feddit.it 11 points 1 year ago (1 children)

Technically, yes... but if it's in a movie/show, you already know it's fiction.

[–] Usernameblankface@lemmy.world 11 points 1 year ago

Bold of you to assume that everyone knows movies and shows aren't real.

[–] FaceDeer@kbin.social 3 points 1 year ago

Yeah, but this doesn't put any restrictions on stuff, it just adds a label to it.

[–] essteeyou@lemmy.world 1 points 1 year ago

Also a lot of video game footage from livestreams, etc.

[–] affiliate@lemmy.world 6 points 1 year ago

this is going to be devastating for all the prank youtube channels

[–] HarkMahlberg@kbin.social 1 points 1 year ago

Wouldn't this enable, for example, Trump claiming he didn't make the "bloodbath" comment, calling it a deepfake, and telling Youtube to remove all the new coverage of it? I mean, more generally, what stops someone from abusing this system?

[–] FiniteBanjo@lemmy.today 4 points 1 year ago

I'm sure that given the 99.99% ethical nature of AI enthusiasts and users that they will absolutely comply with this voluntary identification! /sarcasm

[–] CluckN@lemmy.world 2 points 1 year ago (1 children)

It’s a good first step. If claiming your AI video is real gets more views, then I’m curious whether the extra views outweigh the risk of being caught.

[–] canis_majoris@lemmy.ca 2 points 1 year ago

You can only really pull that with older people and children. Most of us millennials can spot the patterns AI gen produces, but I've seen my dad just consume the content and be largely unaware of the fact that it was artificially generated. He constantly complains those videos say nothing but watches tons of them anyways, mostly related to non-news about sports.