glizzyguzzler

joined 2 years ago
[–] [email protected] 5 points 1 week ago (2 children)

I was hoping the distros would just do the scrub/balance work for you - that makes it zero effort! Good to know OpenSUSE does it for ya. From searching, it looks like Fedora doesn’t have anything built in sadly, but the posts are 1+ yr old so maaaybe they’ve added something since.

[–] [email protected] 10 points 1 week ago (4 children)

It’s great for single drive, RAID 0, and RAID 1. Don’t use it for anything beyond that (RAID 10 obv ok) - it can still lose data on RAID 5/6.

I’m not sure which tools Fedora includes to manage BTRFS, but these scripts are great for scrubbing and balancing: https://github.com/kdave/btrfsmaintenance. Balance redistributes blocks, and scrub checks whether bits have unexpectedly changed due to bit rot (hardware issue or cosmic ray). Scrub weekly for essential photos, important docs, and the like; monthly for everything else. Balance monthly, or on demand if free drive space is tight and you want to claw a bit of it back.
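
If you’d rather run those by hand first, the underlying commands look like this (a minimal sketch - `/mnt/pool` and the `-dusage` threshold are placeholders, not anything Fedora ships):

```
# Kick off a scrub (verifies checksums of all data and metadata), then check progress
sudo btrfs scrub start /mnt/pool
sudo btrfs scrub status /mnt/pool

# Balance only data block groups under 50% full, compacting them
# to return unallocated space to the pool
sudo btrfs balance start -dusage=50 /mnt/pool
```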

RAID 1 will give you bit rot detection with scrub, plus self-healing of any rot it detects (assuming both drives don’t mystically have the same bit flip, which is very unlikely). Single drive will only detect it.

BTRFS snapshot then send/receive is excellent for a quick backup.
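
For example (a sketch - the paths are placeholders, and `send` requires a read-only snapshot, hence `-r`):

```
# Take a read-only snapshot, then ship it to another BTRFS filesystem
sudo btrfs subvolume snapshot -r /mnt/pool/data /mnt/pool/.snapshots/data-backup
sudo btrfs send /mnt/pool/.snapshots/data-backup | sudo btrfs receive /mnt/backup/
```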

Remember that a BTRFS snapshot keeps every file that was in it, even after you delete them off the live drive. Deleted 500 GB of stuff but the free space didn’t budge? A snapshot is probably still holding onto that 500 GB. Delete the snapshot and your space is back.

You can make subvolumes inside a BTRFS volume; they’re basically folders, but you can snapshot just them. Useful for scrubbing your essential docs folder more often than everything else, or snapshotting it more often too.
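
Creating one and snapshotting only it looks like this (paths are placeholders again):

```
# A subvolume behaves like a directory but is its own snapshot boundary
sudo btrfs subvolume create /mnt/pool/docs
sudo btrfs subvolume snapshot -r /mnt/pool/docs /mnt/pool/.snapshots/docs-daily
```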

Lastly, you can disable copy-on-write (CoW) for specific directories. It reduces their safety but increases write speed - good for caches, and I’ve read VM drive images need it for performance.
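
That’s done with the `+C` file attribute, which only applies to files created after it’s set (the directory name is just an example):

```
# New files here are created without CoW (note: they also lose checksumming)
mkdir /mnt/pool/vm-images
sudo chattr +C /mnt/pool/vm-images
```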

Overall, great. Built in, with no need to muck with ZFS’s extra install steps, but you get the benefits ZFS has (as long as you’re OK being limited to RAID 1).

[–] [email protected] 2 points 3 weeks ago

Odd, I’ll try to deploy this when I can and see!

I’ve never had a problem with a volume being on the host system, except when user permissions were messed up. But if you haven’t given it a user parameter, it’s running as root and shouldn’t have a problem. So I’ll check sometime and get back to you!

[–] [email protected] 2 points 4 weeks ago

That’s pretty damn clever

[–] [email protected] 4 points 4 weeks ago* (last edited 4 weeks ago) (2 children)

I try to slap read_only on anything I’d face the Internet with, to further restrict exploit possibilities - it would be abs great if you could make it work! I just follow all the recs on the security cheat sheet, read_only being one of them: https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html

With how simple it is, I guessed that running as a user and restricting capabilities with cap_drop: all wouldn’t be a problem.

For read_only, many containers just need tmpfs: /tmp in addition to the volume for the db. I think many containers deliberately confine their temporary file writing to one directory to make applying read_only easier.
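
Put together, the compose options I mean look something like this (a sketch - the image name, user, and paths are placeholders, not from your project):

```
services:
  app:
    image: example/app:latest   # placeholder image
    user: "1000:1000"           # run as a non-root user
    read_only: true             # root filesystem becomes read-only
    cap_drop:
      - ALL
    tmpfs:
      - /tmp                    # the one writable scratch dir most apps need
    volumes:
      - ./data:/app/data        # writable volume for the db
```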

So again, I’d abs use it with read_only when you get the time to tune it!!

[–] [email protected] 5 points 1 month ago (4 children)

Looks awesome and very efficient - does it also run with read_only: true (with a db volume provided, of course!)? Many containers just need a /tmp, but not always.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago)

I trust the check: `restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose`. With `--read-data-subset` it still does the structural-integrity check while also verifying that much actual data. If I had more bandwidth, I'd check more.

When I set up a new repo, I restore some stuff to make sure it's there with `restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'`.

You could automate that: regularly restore some essential-but-not-often-changing files and compare them against the live originals. I would do that if I weren't lazy, I guess, just to make sure I'm not silently missing some key-but-slowly-changing files. Slowly/not often changing, because the diff would fail if the file changes hourly and you back up daily, etc.
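
A minimal sketch of that idea (the repo, passfile, and folder to verify are placeholders):

```
#!/bin/sh
# Restore one known folder from the latest snapshot, then diff it against the live copy
REPO='/path/to/repo'
PASS='/path/to/passfile'
DIR='/home/me/essential-docs'

restic -r "$REPO" --password-file "$PASS" restore latest \
    --target /tmp/restic-verify --include "$DIR"

# diff -r exits non-zero if anything differs or is missing
diff -r "/tmp/restic-verify$DIR" "$DIR" && echo 'backup verified' || echo 'MISMATCH!'
```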

Or you could do as others have suggested and mount it locally, then traverse it to make sure some key stuff is there and works: `sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'`.

[–] [email protected] 4 points 1 month ago (1 children)

I have my router (OPNsense) redirect all DNS requests to Pi-hole/AdGuard Home. AdGuard Home is easier for this since you can have it redirect a wildcard *.local.domain, while Pi-hole wants every single entry individually (uptime.local.domain, dockage.local.domain, ...). With that combo - the router not letting DNS out to upstream servers, and my local DNS servers redirecting *.local.domain to the correct location(s) - my DNS requests never leave the local network, where an upstream DNS could tell you to kick rocks.

I combined the above with a (hella cheap for 10 yr) paid domain, got a wildcard cert for it without any exposure to the WAN (no IP recorded, but the cert is accepted by devices), and have all *.local.domain requests land on a single Caddy instance that does the final routing to the specific services.
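
The Caddy side looks roughly like this (a sketch - the hostnames, ports, and DNS provider are placeholders/assumptions; the DNS challenge needs a Caddy build that includes your provider's plugin):

```
*.local.domain {
	tls {
		# DNS-01 challenge = wildcard cert with nothing exposed to the WAN
		dns cloudflare {env.CF_API_TOKEN}
	}

	@uptime host uptime.local.domain
	handle @uptime {
		reverse_proxy 127.0.0.1:3001
	}

	@dockage host dockage.local.domain
	handle @dockage {
		reverse_proxy 127.0.0.1:5001
	}
}
```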

I’m not fully sure what you’ve got cooking but I hope typing out what works for me can help you figure it out on your end! Basically the router doesn’t let anything DNS get by to be fucked with by the ISP.

[–] [email protected] 4 points 1 month ago (1 children)

I’m surprised no one’s mentioned Incus - it’s a hypervisor like Proxmox, but it’s designed to install onto Debian no prob. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a repo to get it.
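
Getting it going is roughly this (a sketch - on Debian 12 the packages come from the Zabbly repo, and `incus-ui-canonical` as the web UI package name is my recollection of that repo; Debian 13 carries Incus in the main archive):

```
# After adding the Zabbly apt repo per https://github.com/zabbly/incus
sudo apt install incus incus-ui-canonical

# Interactive first-time setup of storage pools and networking
sudo incus admin init

# Listen on HTTPS so the web UI is reachable
sudo incus config set core.https_address :8443
```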

Proxmox isn’t reliable if you’re not paying them - the free users are the test users - and a bit back they pushed a bad update that broke shit. If I’d updated before they pulled it, I’d have been hosed.

Basically you want a device you don’t have to worry about updating, because updates are good for security. And Proxmox ain’t that.

On top of their custom kernel and stuff, it’s just fewer eyes than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness just for making VMs.

So to summarize: Debian with Incus installed. BTRFS if you’re happy with 1 drive or 2 drives in RAID 1 - BTRFS gets you scrubbing and bitrot detection (and protection with RAID 1). ZFS for more drives. Toss on Cockpit too.

If you want less hands-on, go with OpenMediaVault. No room for Proxmox in my view, esp. with no clustering.

Also, the iGPU on the 6600K is likely good enough for whatever transcoding you’d do (esp. if it’s rare and 1080p - it’ll do 4K no prob, and multiple streams at once). The Nvidia card is just wasting power.

[–] [email protected] 4 points 1 month ago

I too wish for an in-depth blog post, but the GitHub answer is at least succinct.

[–] [email protected] 10 points 1 month ago* (last edited 1 month ago) (2 children)

This answers all of your questions: https://github.com/containers/podman/discussions/13728 (link edited - I’d accidentally linked a Red Hat blog post that didn’t answer your question directly, though it does make clear that specifying a user in rootless Podman matters for the security of the user running the container, if that user does more than just run it).

So the best defense plus ease of use is root Podman assigning non-root UIDs to the containers. You can do the same with Docker, but Docker with non-root UIDs assigned still carries the risk of the root-level Docker daemon being hacked and exploited. Podman has no daemon to hack and exploit, meaning root Podman with non-root UIDs assigned has no downsides!
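
In practice that’s just this (a sketch - the image and UID are placeholders):

```
# Root Podman, but the container's processes run as an unprivileged UID
sudo podman run -d --name app \
    --user 2000:2000 \
    --cap-drop all \
    example/app:latest
```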

[–] [email protected] 15 points 1 month ago

This is shit. I compared the heavy-metal numbers in the lab reports (https://gmoscience.org/wp-content/uploads/2025/01/GSC-HeavyMetalsReports.pdf) against the EU limits on cadmium/lead (https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32023R0915 - mg/kg == ppm, ug/kg == ppb), and their heavy metal amounts are very low.

For the aluminum, the EU’s recommended limit is 1 mg per kg of body weight per week on average - but this EU report makes clear that ~10 mg/kg of aluminum in baked goods is the norm: https://efsa.onlinelibrary.wiley.com/doi/epdf/10.2903/j.efsa.2008.754. So that’s fine too.

I don’t care to dig into the pesticides, but since the metal levels are good-to-fine yet presented as horrendous, I suspect the pesticide levels are overinflated as well.

1
rule (files.catbox.moe)

I have a bridge device, br0, set up with systemd that replaces my primary ethernet eth0. With the br0 bridge device, Incus can create containers/VMs that have unique MAC addresses, which are then assigned IP addresses by my DHCP server (`sudo incus profile device add <profileName> eth0 nic nictype=bridged parent=br0`). Additionally, the containers/VMs can directly contact the host, unlike with MACVLAN.
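
For reference, the systemd-networkd side of that bridge is just three small files (a sketch - the file names are arbitrary and eth0 stands in for whatever your NIC is called):

```
# /etc/systemd/network/10-br0.netdev - define the bridge
[NetDev]
Name=br0
Kind=bridge

# /etc/systemd/network/20-eth0.network - enslave the physical NIC to it
[Match]
Name=eth0
[Network]
Bridge=br0

# /etc/systemd/network/30-br0.network - the host's own IP now lives on br0
[Match]
Name=br0
[Network]
DHCP=yes
```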

With Docker, I can't see a way to get the same feature set out of its options. I have MACVLAN working, but it's even shoddier than the Incus implementation: it can't do DHCP without a poorly-maintained plugin, and the host cannot contact the containers at all because of how MACVLAN works (which precludes running a container like a DNS server that the host itself would want to rely on).
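
What I have now is roughly this (a sketch - the subnet, ranges, and parent NIC are placeholders; Docker assigns the IPs itself instead of asking the LAN's DHCP server):

```
docker network create -d macvlan \
    --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
    --ip-range 192.168.1.192/27 \
    -o parent=eth0 \
    lan-macvlan
```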

Is there a way I've missed to point the bridge driver at a specific parent device? Can I make another bridge device off of br0 and bind to that one host-like? My searching really fell apart when I got to this point.

Also, if someone knows how to match Incus' networking capability with Podman, I would love to hear it. I'm eyeing a move to Podman Quadlets (with Debian 13) after I've got myself well-versed with Docker (and its vast support infrastructure to learn from).

Hoping someone has solved this and wants to share their powers. I can always put Docker/Podman inside an Incus container, but I'd like to avoid onioning if possible.

1
butts rule (files.catbox.moe)
 
1
rule (files.catbox.moe)
1
tithe rule (files.catbox.moe)
 
1
rule (files.catbox.moe)

Context is:

  • I was luckily banned from the fallen onehundredninetysix for vehemently rejecting the orchestrated hoodwinking

  • luckily banned because i'd have posted boston's sloppiest there like three times before it properly made it to the people's onehundredninetysix

  • I use the default web UI which is aggressively broken on my old phone like the pleb I am

 
1
hiberuletion (files.catbox.moe)
 
1
rule (files.catbox.moe)
 
1
rule (lemmy.blahaj.zone)
 