this post was submitted on 03 Feb 2025
932 points (100.0% liked)

Technology

 

Originality.AI looked at 8,885 long Facebook posts made over the past six years.

Key Findings

  • 41.18% of current Facebook long-form posts are Likely AI, as of November 2024.
  • Between 2023 and November 2024, the average percentage of monthly AI posts on Facebook was 24.05%.
  • This reflects a 4.3x increase in monthly AI Facebook content since the launch of ChatGPT. In comparison, the monthly average was 5.34% from 2018 to 2022.
50 comments
[–] [email protected] 5 points 3 months ago* (last edited 3 months ago) (3 children)

Anyone on Facebook deserves to be shit on by slop. They also deserve to be scammed out of all of their money and everything else.

If you’re on Facebook, you deserve this. Get the hell off Facebook.

Edit: ITT: brain-dead, fascist-apologist Facebook users, who just refuse to accept that their platform is one of the biggest enablers of Nazi fascism in this country, and that they are all 100% complicit.

[–] [email protected] 4 points 3 months ago

Have you ever successfully berated a stranger into doing what you wanted them to do?

[–] [email protected] 4 points 3 months ago (1 children)

That’s an extremely low sample size for this

[–] [email protected] 5 points 3 months ago (1 children)

8,885 long-form Facebook posts from various users, collected via a third party. The dataset spans from 2018 to November 2024, with a minimum of 100 posts per month, each containing at least 100 words.

Seems like that's a good baseline rule, and that was about the total number that matched it.
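The sampling rule described above (posts of at least 100 words, and only months with at least 100 such posts) can be sketched as a simple filter. This is a hypothetical reconstruction for illustration, with assumed function names and data shapes, not Originality.AI's actual pipeline:

```python
from collections import defaultdict

def filter_posts(posts, min_words=100, min_posts_per_month=100):
    """posts: iterable of (month, text) pairs.
    Hypothetical helper illustrating the dataset criteria."""
    by_month = defaultdict(list)
    for month, text in posts:
        if len(text.split()) >= min_words:   # keep long-form posts only
            by_month[month].append(text)
    # keep only months with enough qualifying posts
    return {m: ps for m, ps in by_month.items()
            if len(ps) >= min_posts_per_month}
```

Under these two thresholds, a month with plenty of short posts but few long ones drops out of the dataset entirely, which is one plausible reason the final sample is so small.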

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago) (1 children)

With apparently 3 billion active users, only turning up ~9k posts with over 100 words across a 6-year stretch feels like a sampling problem. The conclusion could be drawn that bots simply have better reach.

[–] [email protected] 2 points 3 months ago (1 children)

each post has to be at least 100 words, with at least 100 such posts a month

how many actual users do that?

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (1 children)

I have no idea because I don’t use the site

But to say less than 0.0001% just seems hard to believe
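For scale: 0.0001% of Facebook's claimed ~3 billion users is about 3,000 accounts, so even that tiny share would still be thousands of people. A quick check:

```python
# Quick sanity check: 0.0001% of ~3 billion active users.
users = 3_000_000_000
share = 0.0001 / 100          # 0.0001 percent, as a fraction
print(int(users * share))     # -> 3000
```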

[–] [email protected] 3 points 3 months ago (1 children)

I don't use the site either but 100 words is a lot for a facebook post

[–] [email protected] 3 points 3 months ago (1 children)

If you could reliably detect "AI" using an "AI" you could also use an "AI" to make posts that the other "AI" couldn't detect.

[–] [email protected] 5 points 3 months ago (1 children)

Sure, but then the generator AI is no longer optimised to generate whatever you wanted initially, but to generate text that fools the detector network, thus making the original generator worse at its intended job.

[–] [email protected] 2 points 3 months ago (1 children)

I see no reason why "post right wing propaganda" and "write so you don't sound like "AI" " should be conflicting goals.

The actual argument why I don't find such results credible is that the "creator" is trained to sound like humans, so the "detector" has to be trained to find stuff that does not sound like humans. This means, both basically have to solve the same task: Decide if something sounds like a human.

To be able to find the "AI" content, the "detector" would have to be better at deciding what sounds like a human than the "creator". So for the results to have any kind of accuracy, you're already banking on the "detector" company having more processing power / better training data / more money than, say, OpenAI or google.

But also, if the "detector" was better at the job, it could be used as a better "creator" itself. Then, how would we distinguish the content it created?
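The split-incentive argument can be made concrete with a toy example. Everything here is hypothetical: a naive detector that flags tell-tale phrases, and a generator-side evasion step that dodges it only by altering its original output:

```python
# Toy illustration of the incentive split (entirely hypothetical):
# a naive detector flags tell-tale phrases often attributed to LLMs,
# and evading it forces the generator to change what it says.
TELLTALES = {"delve", "tapestry", "furthermore"}

def detector_confidence(text):
    # fraction of words that are tell-tale phrases (0 = looks human)
    words = text.lower().split()
    return sum(w in TELLTALES for w in words) / max(len(words), 1)

def evade(text):
    # generator side: drop flagged words, at the cost of altering
    # the originally intended output
    return " ".join(w for w in text.split()
                    if w.lower() not in TELLTALES)
```

The evaded text scores zero on this detector, but it is no longer the text the generator originally produced, which is the trade-off being debated above.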

[–] [email protected] 1 points 3 months ago

I'm not necessarily saying they're conflicting goals, merely that they're not the same goal.

The incentive for the generator becomes "generate propaganda that doesn't have the language characteristics of typical LLMs", so the incentive is split between those goals. As a simplified example, if the additional incentive were "include the word bamboo in every response", I think we would both agree that it would do a worse job at its original goal, since the constraint means that outputs that would have been optimal previously are now considered poor responses.

Meanwhile, the detector network has a far simpler task - given some input string, give back a value representing the confidence it was output by a system rather than a person.

I think it's also worth considering that LLMs don't "think" in the same way people do - where people construct an abstract thought, then find the best combinations of words to express that thought, an LLM generates words that are likely to follow the preceding ones (including prompts). This does leave some space for detecting these different approaches better than at random, even though it's impossible to do so reliably.

But I guess really the important thing is that people running these bots don't really care if it's possible to find that the content is likely generated, just so long as it's not so obvious that the content gets removed. This means they're not really incentivised to spend money training models to avoid detection.
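The detector interface described above reduces to a single scoring function: string in, confidence out. The scoring rule below (low vocabulary variety reads as more machine-like) is a hypothetical stand-in to show the shape of the interface, not a real detection method:

```python
# Toy detector interface: given a string, return a confidence in
# [0, 1] that it was machine-generated. The scoring rule (repetitive,
# low-variety text scores higher) is a hypothetical stand-in.
def ai_confidence(text):
    words = text.lower().split()
    if not words:
        return 0.0
    variety = len(set(words)) / len(words)  # 1.0 = all words distinct
    return round(1.0 - variety, 3)
```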

[–] [email protected] 3 points 3 months ago (1 children)

Probably on par with the junk human users are posting

[–] [email protected] 2 points 3 months ago

Hmm, "the junk human users are posting", or "the human junk users are posting"? We are talking about Facebook here, after all.

[–] [email protected] 2 points 3 months ago

Deleted my account a little while ago, but for my feed I think it was higher. You couldn't block them fast enough, and most were obviously AI pictures that, if the comments are to be believed as being actual humans, people thought were real. It was a total nightmare land. I'm sad that I have now lost contact with the few distant friends I had on there, but otherwise NOTHING lost.

[–] [email protected] 2 points 3 months ago

Take note: this does not appear to be an independent study. Tell me I'm wrong?

[–] [email protected] 2 points 3 months ago

Not my Annie! No! Not my Annie!

[–] [email protected] 2 points 3 months ago (9 children)

how tf did it take 6 years to analyze 8000 posts

[–] [email protected] 2 points 3 months ago (1 children)

and, is the jury already in on which ai is most fuckable?

[–] [email protected] 3 points 3 months ago (1 children)

I'd tell you, but my area network appears to have already started blocking DeepSeek.

[–] [email protected] 2 points 3 months ago (2 children)
[–] [email protected] 2 points 3 months ago

According to Wiz, DeepSeek promptly fixed the issue when informed about it.

:-/
