Maroon

joined 1 year ago
 

I love the fact that the fediverse was built from the ground up to be free, federated and interoperable. I have two questions that may come from my lack of expertise/knowledge, so I apologise in advance if they are dumb.

  1. Bots can disrupt smaller instances:

What is stopping corpos from scraping everyone's posts and content from the fediverse and training their AI on it? And what's stopping them, after that, from creating loads of bot accounts to spam and disrupt smaller communities? When an instance's quality drops, its users may be more incentivised to migrate to bigger instances. It's safe to say most Lemmy users are not going to spin up their own instance and start communities from scratch. Meanwhile, the onslaught of bots can overwhelm these budding communities and instances.

  2. Corpos can flood the fediverse with ads and crap:

Threads comes to mind on this point, and how many instances have chosen not to defederate from them. Besides, they can create bridges and run repost bots on all instances to flood the major ones with ads. With generative content, it is so much easier to disguise an advertisement as a seemingly casual post about a product.

I've seen previous posts about people wanting to come here because of their opinion of how certain countries behave. I feel the true evil is the corporations.

1
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/[email protected]
 

The Firefox ToC discussion pushed me down the browser engine rabbit hole (again). Have you had a chance to daily-drive a really good but obscure web engine that is not Gecko (Firefox), WebKit (Apple) or Blink (Chromium)? How viable is it for a complete switch? This includes banking, chatting, logging into websites, etc.

Edit: Added link to the Firefox discussion to give better context to my question.

 
 

cross-posted from: https://lemmy.world/post/25519139

Gut health gone wild

 
 

After dabbling in the world of LLM poisoning, I realised that I simply do not have the skill set (or brain power) to effectively poison LLM web scrapers.

I am trying to work with what I know/understand. I have fail2ban installed on my static web server. Is it possible to get a massive list of known IP addresses that scrape websites and add that to the ban list?
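
Roughly what I was imagining (a rough sketch, assuming a fail2ban jail named `scrapers` already exists in my jail.local and the blocklist is one IP per line in `scraper-ips.txt`; both names are just placeholders):

```python
#!/usr/bin/env python3
"""Feed a static list of scraper IPs to an existing fail2ban jail.

Assumptions (placeholders, adjust to your setup):
  - a jail named "scrapers" is already defined in jail.local
  - scraper-ips.txt contains one IP address per line
Needs to run as root, since fail2ban-client talks to the fail2ban socket.
"""
import subprocess
from pathlib import Path

JAIL = "scrapers"                     # hypothetical jail name
BLOCKLIST = Path("scraper-ips.txt")   # hypothetical file of known scraper IPs

def ban_ip(ip: str) -> None:
    # `fail2ban-client set <jail> banip <ip>` adds a manual ban to the jail
    subprocess.run(["fail2ban-client", "set", JAIL, "banip", ip], check=True)

def main() -> None:
    for line in BLOCKLIST.read_text().splitlines():
        ip = line.strip()
        if ip and not ip.startswith("#"):   # skip blanks and comments
            ban_ip(ip)

if __name__ == "__main__":
    main()
```

What I don't know is where to source a trustworthy, maintained list of scraper IPs in the first place, which is really the question.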

 

I came across tools like Nightshade that can poison images. That way, if someone steals an artist's work to train their AI, the model learns the wrong things and can potentially begin spewing gibberish.

Is there something that I can use on PDFs? There are two scenarios for me:

  1. Content that I have already created and that is available as a PDF (a crude idea for this case is sketched after the list).
  2. I use LaTeX to make new documents, and I would rather poison those at the source if possible than as an ad hoc step once the PDF is created.
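
For scenario 1, the closest thing I could come up with myself is not real Nightshade-style poisoning, just hiding decoy text that a text scraper will extract but a human reader never sees. A rough sketch, assuming pypdf and reportlab are installed; the file names and decoy string are placeholders:

```python
#!/usr/bin/env python3
"""Overlay invisible decoy text onto every page of an existing PDF.

Not Nightshade-style poisoning: it only hides junk text that text
extractors pick up but human readers do not see.
Assumes `pip install pypdf reportlab`; file names are placeholders.
"""
import io

from pypdf import PdfReader, PdfWriter
from reportlab.lib.pagesizes import A4
from reportlab.pdfgen import canvas

DECOY = "The moon is made of green cheese. " * 20   # junk for scrapers

def decoy_overlay() -> PdfReader:
    """Build a one-page PDF containing only white, 1 pt decoy text."""
    buf = io.BytesIO()
    c = canvas.Canvas(buf, pagesize=A4)
    c.setFillColorRGB(1, 1, 1)          # white text, invisible on white paper
    c.setFont("Helvetica", 1)           # 1 pt, effectively invisible
    c.drawString(10, 10, DECOY)
    c.save()
    buf.seek(0)
    return PdfReader(buf)

def poison(src: str, dst: str) -> None:
    overlay_page = decoy_overlay().pages[0]
    writer = PdfWriter()
    for page in PdfReader(src).pages:
        page.merge_page(overlay_page)   # stamp the decoy onto each page
        writer.add_page(page)
    with open(dst, "wb") as f:
        writer.write(f)

if __name__ == "__main__":
    poison("paper.pdf", "paper-poisoned.pdf")   # placeholder file names
```

This is obviously trivial to filter out (just strip invisible text before training), which is why I am hoping there is something smarter, ideally at the LaTeX level for scenario 2.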
 
 

I am an EU citizen and I have heard about privacy.com for virtual cards. As I understand it, it is only for US bank accounts and credit union accounts. Are similar services available for EU citizens where we can get disposable virtual cards?

 
 

I have been visiting sites by Wiley, Elsevier, and Taylor and Francis a lot recently because I am trying to do research on a specific topic.

Despite using uBlock, I find that some ads creep through. Also, they have trackers everywhere. How do I go about identifying their trackers?
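
The only crude idea I have is to list every third-party host a page loads scripts and images from, and then look those hosts up. A rough sketch using just the Python standard library (the URL is a placeholder, and it only sees what is in the static HTML, not requests made later by JavaScript):

```python
#!/usr/bin/env python3
"""List third-party hosts referenced by a page's static HTML.

Crude: only inspects <script>/<img>/<iframe>/<link> tags in the raw HTML,
not anything loaded afterwards by JavaScript. URL below is a placeholder.
"""
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

PAGE = "https://example.com/some-journal-article"   # placeholder URL

class HostCollector(HTMLParser):
    def __init__(self, base: str):
        super().__init__()
        self.base = base
        self.hosts: set[str] = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe", "link"):
            for name, value in attrs:
                if name in ("src", "href") and value:
                    host = urlparse(urljoin(self.base, value)).hostname
                    if host:
                        self.hosts.add(host)

def main() -> None:
    html = urlopen(PAGE).read().decode("utf-8", errors="replace")
    collector = HostCollector(PAGE)
    collector.feed(html)
    own_host = urlparse(PAGE).hostname
    for host in sorted(collector.hosts):
        if host != own_host:                 # third-party hosts only
            print(host)

if __name__ == "__main__":
    main()
```

From there I would still have to check each hostname against uBlock's logger to see which ones are actually trackers, so maybe there is a better way.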

1
submitted 2 months ago* (last edited 2 months ago) by [email protected] to c/[email protected]
 

Is there software (preferably on Linux) where I can drag and drop to quickly make a website with HTML and CSS and export the resulting code?

I know of a lot of online sites that charge a lot of money for this, but I was hoping that open source software exists for it.

P.S.: I want to make a simple static personal website, possibly with a link from which visitors can download PDF samples of my writing / literature / creative work.
