14specks

joined 3 years ago
[–] [email protected] 1 points 2 years ago

Not in the US there isn't

[–] [email protected] 1 points 2 years ago

I see, I knew that person had a huge bone to pick with the Lemmy devs over their personal politics (nearly irrelevant on a federated platform imo), so I didn't know if it was along the same lines.

[–] [email protected] 3 points 2 years ago (2 children)

Is that the same person who runs the FediTips Mastodon?

[–] [email protected] 1 points 2 years ago

Ah, makes sense. Yeah it still exists on my instance. Hopefully the underlying bug gets resolved, since it's now wasted both of our time :)

[–] [email protected] 1 points 2 years ago (2 children)

That comment did not start the conversation; it was a response to the comment above it, which directly mentions glass.

[–] [email protected] 3 points 2 years ago (1 children)

They're gonna move away from websockets within a couple of weeks, from what I hear

[–] [email protected] 4 points 2 years ago (1 children)

Do they expect lemmy.world admins to police 25K people across 700+ servers? I don’t think that’s how it works.

I think that the expectation is that lemmy.world needs to show a good faith effort to moderate users coming from their instance. I'm not sure whether they have or have not done this, since it is not my home instance.

[–] [email protected] 10 points 2 years ago

It was a mistake (though natural) to trust Reddit with all that knowledge in the first place.

[–] [email protected] 1 points 2 years ago (1 children)

This also makes me wonder how these models are going to be trained in the future. What happens when, for example, half of the training data is output from previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.

Between this and the "deep fake" tech I'm kinda hoping for a light Butlerian jihad that gets everyone to log tf off and exist in the real world, but that's kind of a hot take
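For anyone curious, here's a toy sketch of that compounding effect. This is my own assumption-laden simulation, not how any real model is trained: the "model" just fits a normal distribution, and (like a generator that favors likely outputs) it never emits anything more than 2 sigma from its mean. Train each generation only on the previous generation's output and the tails erode a bit more every round.

```python
import random
import statistics

def train(data):
    # "Train" a toy model by fitting a normal distribution: (mean, stdev).
    return statistics.mean(data), statistics.stdev(data)

def generate(model, n, rng):
    # Sample from the model, but never emit anything more than
    # 2 sigma from the mean (a stand-in for favoring likely outputs).
    mu, sigma = model
    out = []
    while len(out) < n:
        x = rng.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:
            out.append(x)
    return out

rng = random.Random(0)
human_data = [rng.gauss(0.0, 1.0) for _ in range(1000)]  # original "human" data

model = train(human_data)
start_sigma = model[1]
for generation in range(10):
    synthetic = generate(model, 1000, rng)  # trained only on prior model's output
    model = train(synthetic)
end_sigma = model[1]

# end_sigma lands well below start_sigma: the distribution has narrowed,
# and diversity the original data had is gone for good.
```

Each generation keeps only the middle ~95% of what the previous one could produce, so the spread shrinks by a roughly constant factor per round. Real training pipelines are obviously nothing like this, but the one-way loss of tail diversity is the worry in a nutshell.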

[–] [email protected] 2 points 2 years ago (3 children)

I think what sites have been running into is that it's difficult to tell what is and is not AI-generated, so enforcement of a ban is difficult. Some would say that it's better to have an AI-generated response out there in the open, which can then be verified and prioritized appropriately from user feedback. If there's a human-generated response that's higher-quality, then that should win anyway, right? (Idk tbh)

[–] [email protected] 0 points 2 years ago (1 children)

The whole point is that it's federated too. The devs haven't really taken a hard stance on much to do with the software except "we're not getting rid of the hard-coded slur filter, because we really don't want to see the open fascists who would care about that using it". I don't fully agree or disagree with that, but they don't really have much else to say about the community at large. Des has repeatedly stated that he wants there to be a healthy number of "mainstream"/"liberal" instances whose content he has nothing to do with.
