this post was submitted on 28 Mar 2025
146 points (100.0% liked)

Ask Lemmy


I have noticed that Lemmy, so far, does not have a lot of fake accounts from bots or AI slop, at least from what I can tell. I am wondering how the heck we keep this community free of that kind of stuff as continuous waves of redditors land here and the platform grows.

EDIT: a potential solution:

I have an idea where people can flag a post or a user as a bot, and if it is confirmed to be a bot, the moderators could have a tool that essentially shadow-bans it into an inbox that just gets dumped occasionally. My thinking is that the people creating the bots might then not realize their bot has been banned, and so wouldn't try to create replacements. This could effectively reduce the number of bots without bot creators realizing, or even being able to tell, whether their bots have been blocked. The one thing that would also be needed is a way to request being unbanned for anyone hit as a false positive. All of this would have to be built into Lemmy's moderation tools, and I don't know whether any of it exists currently. A rough sketch of the flow is below.
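To make that concrete, here is a minimal sketch of the flow in Python. Every name and number in it (the flag threshold, the class and its methods) is made up for illustration; as far as I know none of this exists in Lemmy's actual moderation code.

```python
from collections import defaultdict

FLAG_THRESHOLD = 5  # made up: distinct reporters needed to trigger a shadow ban


class ShadowBanQueue:
    """Quarantines posts from shadow-banned accounts instead of publishing them.

    The banned account still sees its own posts as live, so a bot operator
    gets no signal that the ban happened and no prompt to spin up replacements.
    """

    def __init__(self):
        self.flags = defaultdict(set)        # suspect -> set of reporters
        self.banned = set()                  # shadow-banned accounts
        self.quarantine = defaultdict(list)  # account -> held posts

    def flag(self, reporter, suspect):
        """Record a bot report; ban once enough distinct users have flagged."""
        self.flags[suspect].add(reporter)    # each reporter counts only once
        if len(self.flags[suspect]) >= FLAG_THRESHOLD:
            self.banned.add(suspect)

    def submit_post(self, author, body):
        """Posts from banned accounts are held silently; others publish."""
        if author in self.banned:
            self.quarantine[author].append(body)  # never shown to anyone else
        else:
            print(f"{author}: {body}")            # stand-in for real publishing

    def dump(self):
        """Run occasionally (e.g. nightly) to discard everything held."""
        self.quarantine.clear()

    def appeal(self, account, confirmed_human):
        """False-positive path: a moderator reviews and lifts the ban."""
        if confirmed_human:
            self.banned.discard(account)
            self.flags.pop(account, None)


# Example: five different users flag "spambot", after which its posts
# quietly land in the quarantine inbox instead of the community.
q = ShadowBanQueue()
for i in range(5):
    q.flag(f"user{i}", "spambot")
q.submit_post("spambot", "totally organic opinion")  # quarantined, not shown
q.submit_post("alice", "actual human post")          # published normally
```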

[–] [email protected] 3 points 1 month ago (2 children)

I was going to make a joke by posting an obviously stupid and AI-generated answer, but I'm genuinely surprised by the insightfulness of Gemini's response:

Dealing with bots and AI-generated "slop" (low-quality, repetitive, or irrelevant content) is an ongoing challenge for any online platform, including Lemmy. Here's a breakdown of strategies and considerations:

Challenges:

  • Federation Complexity:
    • Lemmy's federated nature means content comes from various instances, making centralized control difficult.
    • Moderation efforts need to be distributed, which can lead to inconsistencies.
  • AI Advancement:
    • AI-generated content is becoming increasingly sophisticated, making it harder to distinguish from human-created posts.
    • Bots can mimic human behavior, evading simple detection methods.
  • Balancing Freedom and Control:
    • Lemmy values user autonomy and free speech, which can conflict with aggressive moderation.
    • Overly strict measures might stifle legitimate contributions.

Strategies:

  • Instance-Level Moderation:
    • Individual Lemmy instances play a crucial role in setting their own moderation policies.
    • Instance administrators can implement tools and rules to filter out unwanted content.
    • Blocking known bot-infested instances is a common practice.
  • Community-Driven Moderation:
    • Empowering users to report suspicious activity and content.
    • Implementing robust reporting mechanisms and clear guidelines.
    • Utilizing community moderators to review and take action on reports.
  • Technical Solutions:
    • Developing algorithms to detect patterns of bot behavior and AI-generated content (see the sketch after this list).
    • Implementing CAPTCHAs or other verification methods to prevent automated accounts.
    • Enhancing Lemmy's software to provide better moderation tools.
  • User Filtering:
    • Lemmy itself has user-level settings that allow for the hiding of bots. As shown in the search results, there are settings that can hide bot activity.
  • Community Education:
    • Raising awareness about the dangers of bots and AI slop.
    • Educating users on how to identify and report suspicious content.
  • Development of Lemmy Software:
    • As shown in the GitHub issue, there is development discussion regarding excluding bot comments from sorting algorithms. This type of software development will be key to combating bot influence on the platform.

Key Considerations:

  • The fight against bots and AI slop is an ongoing process.
  • A multi-faceted approach is necessary, combining technical solutions, community involvement, and responsible moderation.
  • Maintaining a balance between control and freedom is essential for the long-term health of the Lemmy network.
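
For what it's worth, the "detect patterns of bot behavior" bullet is less magic than it sounds. Here's a toy version in Python, with entirely made-up thresholds, that just checks how machine-regular an account's posting schedule is:

```python
import statistics


def looks_like_bot(post_times, min_posts=10, cv_cutoff=0.1):
    """Flag accounts whose posting intervals are suspiciously regular.

    Humans post in irregular bursts; simple bots often post on a timer,
    so the coefficient of variation (stdev / mean) of the gaps between
    their posts sits near zero. Both thresholds here are made up.
    """
    if len(post_times) < min_posts:
        return False  # not enough history to judge
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True   # many posts at the same instant
    return statistics.stdev(gaps) / mean_gap < cv_cutoff


# An account posting exactly every 600 seconds gets flagged.
timer_bot = [i * 600.0 for i in range(20)]
print(looks_like_bot(timer_bot))  # True
```

A real detector would combine many weak signals like this one (posting cadence, duplicate text, account age, vote patterns), since any single heuristic is easy for a determined operator to evade.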
[–] [email protected] 10 points 1 month ago

I was going to make a joke by posting an obviously stupid and AI-generated answer

Non-joke slop is even worse.

[–] [email protected] 8 points 1 month ago

I don't mind you generating an AI answer to gain insight. However, it would be better if you took the reply and then gave your own view on it, rather than just reposting it.