liliumstar

joined 2 years ago
[–] [email protected] 2 points 2 days ago

Yup, I think it'd work fine, especially if you want the ability to easily inspect individual items.

Any of the popular python yaml libraries will be more than sufficient. With a bit of work, you can marshal the input (when reading files back) into python (data)classes, making it easy to work with.
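A minimal sketch of that marshaling step, assuming PyYAML and illustrative field names (use whatever your scraped items actually contain):

```python
import yaml  # PyYAML, assumed; any maintained YAML library works similarly
from dataclasses import dataclass

@dataclass
class Entry:
    # illustrative fields, not from any real schema
    title: str
    year: int
    tags: list

raw = yaml.safe_load("""
title: Example Item
year: 2021
tags: [fiction, archived]
""")
entry = Entry(**raw)  # marshal the parsed dict into a dataclass
```

Attribute access (`entry.title`) and type hints make downstream processing much less error-prone than raw dicts.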

[–] [email protected] 13 points 2 days ago (11 children)

I would scrape them into individual json files with more info than you think you need, just for the sake of simplicity. Once you have them all, then you can work out an ideal storage solution, probably some kind of SQL DB. Once that is done, you could turn the json files into a .tar.zst and archive it, or just delete them if you are confident in processed representation.
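The one-file-per-item step can be as simple as this (paths and field names are placeholders):

```python
import json
import pathlib

def save_item(item: dict, out_dir: str = "items") -> pathlib.Path:
    """Write one scraped record to its own JSON file, keyed by id (sketch)."""
    p = pathlib.Path(out_dir)
    p.mkdir(parents=True, exist_ok=True)
    path = p / f"{item['id']}.json"
    # keep everything you scraped; disk is cheap, re-scraping is not
    path.write_text(json.dumps(item, indent=2, ensure_ascii=False))
    return path
```

Later, something like `tar --zstd -cf items.tar.zst items/` (GNU tar) produces the archive mentioned above.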

Source: I completed a similar but much larger story site archive and found this to be the easiest way.

[–] [email protected] 3 points 3 days ago

I'm with Azire; they have port forwarding and 10-gig servers. Note they were recently bought by Malwarebytes, so it is possible things will change in the future. For the time being, things have been great. I moved from OVPN after I and others started experiencing persistent failures.

I've been meaning to try out CryptoStorm. If anyone has experience with them please share.

[–] [email protected] 2 points 3 days ago

Congrats! I just got a similar setup running on Arch with a 5700 XT. When I looked at it a couple of years ago, it wasn't really possible. Now, smooth sailing.

[–] [email protected] 3 points 1 week ago

Yeah, you can turn off registration without a token. Then, if you want someone to register you can issue them a registration token, or manually create their account.

Federation can be turned on on a case-by-case basis.

You can set rooms to invite-only and not discoverable. Alternatively, you can use an invite-only space that allows users to join rooms from there.

The first two parts are done in the server config; see the Synapse docs. The last is done once the server is set up and running, as an admin.
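For the config side, the relevant `homeserver.yaml` options look roughly like this (a sketch; verify the option names against the Synapse docs for your version):

```yaml
# homeserver.yaml (sketch)
enable_registration: true
registration_requires_token: true   # registration only with an issued token
# optional: restrict federation to specific servers
federation_domain_whitelist:
  - trusted.example.org
```

Registration tokens themselves are then issued through the admin API once the server is running.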

[–] [email protected] 5 points 1 week ago

It is, but it doesn't really play well with PTs in general. Not all trackers support it (some of the software is that old), and even when it works I've run into unexpected issues.

I would like it if there was increased adoption.

[–] [email protected] 15 points 1 week ago (5 children)

I don't know of any private trackers that are interested in users in your particular circumstances. The reality is, you can't really seed behind CGNAT. I would seriously consider shelling out for a VPN; you can get an okay one for 5-10 euro a month. If you're technically inclined, you could even set up your own on a cheap VPS for less, given that you don't need fast networking.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

If you have more experience with the Linux CLI than with PowerShell, I'd go with that. There are a few options: WSL2, MSYS2, Cygwin.

[–] [email protected] 1 points 2 weeks ago

It makes way more sense to implement an auth cooldown than to increase the server load for a single action. I can't speak to the ideal settings for Argon2id, but I like to think the defaults are fine in most cases.

[–] [email protected] 3 points 2 weeks ago

It is possible to tonemap DV to SDR, and I think to static HDR as well. Look into madVR and/or mpv. Both should be able to provide real-time tonemapping during playback. For reference, these pink/green videos would be DV Profile 5 (P5). I've heard the results are not great, so I would stick with P8 hybrid releases.
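For mpv, a couple of `mpv.conf` lines along these lines enable GPU tonemapping (a sketch; check the mpv manual for the current option names and defaults):

```
# ~/.config/mpv/mpv.conf (sketch)
vo=gpu-next          # libplacebo-based renderer, best HDR/DV handling
tone-mapping=bt.2390
target-prim=bt.709   # tonemap down to an SDR display
```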

[–] [email protected] 2 points 3 weeks ago

To start small, set up a static website behind nginx. This requires you to create a basic website or copy a template; it goes somewhere in your filesystem (on Linux, /var/www is common). Once you have that, set up the nginx service and point it at that location. You can do this locally and then expose it to the net, or put it on a VPS. Here is a dead-simple guide presuming you have a remote server: https://dev.to/starcc/how-to-deploy-a-simple-website-with-nginx-a-comically-easy-guide-202g
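The nginx side boils down to one server block (domain and paths here are placeholders; adapt to your distro's layout):

```nginx
# /etc/nginx/sites-available/example (sketch)
server {
    listen 80;
    server_name example.com;
    root /var/www/example;   # where your static files live
    index index.html;
}
```

Symlink it into `sites-enabled`, run `nginx -t` to check the syntax, and reload the service.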

Once you have that covered, make sure you know how to set up SSH keys and such, then install, configure, and run services. From there, most things are easy outside of overly complicated configurations.
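The SSH key setup is two commands (the key path here is just for demonstration; the default is `~/.ssh/id_ed25519`):

```shell
# generate an Ed25519 key pair with no passphrase (demo path)
rm -f /tmp/demo_ed25519 /tmp/demo_ed25519.pub
ssh-keygen -t ed25519 -f /tmp/demo_ed25519 -N "" -q
# then copy the public key to your server (replace user@host):
# ssh-copy-id -i /tmp/demo_ed25519.pub user@yourserver
```

After that, disable password login on the server for good measure.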

[–] [email protected] 4 points 4 weeks ago

Whether you like it or not, that's more or less what happens. You can/will lose a bunch of accounts for causing trouble. Sometimes I think it's a bit over the top: instead of keeping out toxic or non-contributing folks, it becomes a personal vendetta or punishes an innocent violation.

Overall, I'm a fan of banning known bad users, but restraint should be used and collected personal information should be minimized.


I've been working on this subtitle archive project for some time. It is a Postgres database along with CLI and API applications that let you easily extract the subs you want. It is primarily intended for encoders or people with large libraries, but anyone can use it!

PGSub is composed of three dumps:

  • opensubtitles.org.Actually.Open.Edition.2022.07.25
  • Subscene V2 (prior to shutdown)
  • Gnome's Hut of Subs (as of 2024-04)

As such, it is a good resource for films and series up to around 2022.

Some stats (copied from README):

  • Out of 9,503,730 files originally obtained from dumps, 9,500,355 (99.96%) were inserted into the database.
  • Out of the 9,500,355 inserted, 8,389,369 (88.31%) are matched with a film or series.
  • There are 154,737 unique films or series represented, though note the lines get a bit hazy when considering TV movies, specials, and so forth. 133,780 are films, 20,957 are series.
  • 93 languages are represented, with a special '00' language indicating a .mks file with multiple languages present.
  • 55% of matched items have a FPS value present.

Once imported, the recommended way to access it is via the CLI application. The CLI and API can be compiled on Windows and Linux (and maybe Mac), and there are also pre-built binaries available.

The database dump is distributed via torrent (if it doesn't work for you, let me know), which you can find in the repo. It is ~243 GiB compressed, and uses a little under 300 GiB of table space once imported.

For a limited time I will devote some resources to bug-fixing the applications, or perhaps adding some small QoL improvements. But, of course, you can always fork them or make your own if they don't suit you.
