this post was submitted on 14 Feb 2024
1079 points (100.0% liked)

Technology

[–] [email protected] 12 points 1 year ago (3 children)

robots.txt is purely textual; you can't run JavaScript from it or log anything. Plus, anyone who doesn't intend to follow robots.txt wouldn't query it in the first place.

[–] [email protected] 55 points 1 year ago (2 children)

If it doesn't get queried, that's the fault of the web scraper. You don't need JS built into the robots.txt file either. Just add a line like:

Disallow: /here-there-be-dragons.html

Any client that hits that page (and maybe doesn't pass a captcha check) gets banned. Or even better, they get a long stream of nonsense.
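A minimal sketch of that banning idea, assuming a standard combined-format access log; the trap path is the decoy from the comment above, and the helper name is illustrative:

```python
import re

# Hedged sketch: collect the IPs of clients that requested the honeypot
# path disallowed in robots.txt. The log format (common/combined) and
# the function name are assumptions, not any particular server's API.
TRAP_PATH = "/here-there-be-dragons.html"
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

def banned_ips(log_lines):
    """Return the set of client IPs that requested the trap path."""
    bans = set()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m and m.group(2) == TRAP_PATH:
            bans.add(m.group(1))
    return bans

sample = [
    '203.0.113.7 - - [14/Feb/2024:10:00:00 +0000] "GET /robots.txt HTTP/1.1" 200 64',
    '203.0.113.7 - - [14/Feb/2024:10:00:01 +0000] "GET /here-there-be-dragons.html HTTP/1.1" 200 12',
    '198.51.100.2 - - [14/Feb/2024:10:00:02 +0000] "GET /index.html HTTP/1.1" 200 512',
]
print(banned_ips(sample))  # {'203.0.113.7'}
```

Feeding the resulting set into a firewall rule (or an nginx deny list) is left to taste.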

[–] [email protected] 24 points 1 year ago (2 children)

server {
    server_name herebedragons.example.com;
    root /dev/random;
}

[–] [email protected] 15 points 1 year ago (1 children)

Nice idea! Better use /dev/urandom though, as that is non-blocking. See here.
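To illustrate the non-blocking point: reads from /dev/urandom always return immediately, regardless of the kernel's entropy estimate (a quick sketch, assuming a Linux-like system where that device exists):

```python
# /dev/urandom never blocks: a read returns at once with the requested
# number of bytes, unlike /dev/random, which could historically stall
# waiting for entropy.
with open("/dev/urandom", "rb") as f:
    chunk = f.read(1024)  # returns immediately

print(len(chunk))  # 1024
```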

[–] [email protected] 1 points 1 year ago

That was really interesting. I've always used urandom out of habit and wondered what the difference was.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago)

I wonder if Nginx would just load random into memory until the kernel OOM kills it.

[–] [email protected] 10 points 1 year ago

I actually love the data-poisoning approach. I think that sort of strategy is going to be an unfortunately necessary part of the future of the web.

[–] [email protected] 15 points 1 year ago (1 children)

Your second point is a good one, but you absolutely can log the IP that requested robots.txt. That's just a standard part of any HTTP server ever, no JavaScript needed.

[–] [email protected] 10 points 1 year ago

You'd probably have to go out of your way to avoid logging this. I've always seen such logs enabled by default when setting up web servers.

[–] [email protected] 12 points 1 year ago

People not intending to follow it is the real reason not to bother, but it's trivial to track who downloaded the file and then hit something they were asked not to.

Like, 10 minutes' work to do right. You don't need JS to do it at all.
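That tracking step can be sketched in a few lines: flag any client that fetched robots.txt and then requested a disallowed path anyway. The log format and the disallowed prefix are illustrative assumptions:

```python
# Hedged sketch: clients that downloaded robots.txt and subsequently
# hit a path it disallows are flagged as violators. Assumes common-format
# access-log lines; "/private/" stands in for a real Disallow rule.
DISALLOWED = "/private/"

def violators(log_lines):
    fetched, flagged = set(), set()
    for line in log_lines:
        ip = line.split()[0]
        path = line.split('"')[1].split()[1]  # request target from "GET /x HTTP/1.1"
        if path == "/robots.txt":
            fetched.add(ip)
        elif path.startswith(DISALLOWED) and ip in fetched:
            flagged.add(ip)
    return flagged

sample = [
    '203.0.113.7 - - [14/Feb/2024:10:00:00 +0000] "GET /robots.txt HTTP/1.1" 200 64',
    '203.0.113.7 - - [14/Feb/2024:10:00:05 +0000] "GET /private/admin HTTP/1.1" 200 12',
    '198.51.100.2 - - [14/Feb/2024:10:00:06 +0000] "GET /private/admin HTTP/1.1" 200 12',
]
print(violators(sample))  # {'203.0.113.7'}
```

Note that 198.51.100.2 isn't flagged here: it never fetched robots.txt, so it never "agreed" to the rules in the first place, which is exactly the distinction the comment above is drawing.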