this post was submitted on 30 Jul 2024
171 points (100.0% liked)

Selfhosted


I saw this post today on Reddit and was curious to see if views are similar here as they are there.

  1. What are the best benefits of self-hosting?
  2. What do you wish you would have known as a beginner starting out?
  3. What resources do you know of to help a non-computer-scientist/engineer get started in self-hosting?
[–] [email protected] 63 points 8 months ago (7 children)

The big thing for #2 would be to separate out what you actually need vs what people keep recommending.

General guidance is useful, but there's a lot of 'You need ZFS!' and 'You should use K8s!' and 'Use X software!'

My life got immensely easier when I figured out I did not need any features ZFS brought to the table, and I did not need any of the features K8s brought to the table, and that less is absolutely more. I ended up doing MergerFS with a proper offsite backup method because, well, it's shockingly low-complexity.

And I ended up doing Docker with a bunch of compose files and bind mounts, because it's shockingly low-complexity. And it's just running on Debian, instead of some OS that has a couple of layers of additional software to make things "easier" because, again, it's low-complexity.

I can re-deploy the entire stack on new hardware in about 10 minutes (I've tested this a few times just to make sure my backup scripts work), and there's basically zero vendor tie-in or dependencies that you'd have to get working first, since it's just a pile of tarballs and packages from the distro's package manager on, well, ANY distro.
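None of that is exotic; the whole "pile of tarballs" approach can be sketched in a few lines of shell. This is a hypothetical sketch, not the commenter's actual script: the ~/stacks layout and all file names are invented for illustration.

```shell
#!/bin/sh
set -eu
# Hypothetical layout: one directory per service under ~/stacks, each
# holding its compose file and its bind-mounted data (all names here
# are examples, not anyone's real setup).
STACKS="${STACKS:-$HOME/stacks}"
BACKUP="${BACKUP:-/tmp/stacks-backup.tar.gz}"
RESTORE_DIR="${RESTORE_DIR:-/tmp/stacks-restore}"

# Demo stack so the sketch is self-contained; skip if you have real ones.
mkdir -p "$STACKS/jellyfin"
[ -e "$STACKS/jellyfin/compose.yml" ] || echo "services: {}" > "$STACKS/jellyfin/compose.yml"

# Back up: one tarball captures the compose files *and* the state.
tar -czf "$BACKUP" -C "$(dirname "$STACKS")" "$(basename "$STACKS")"

# Restore on any distro: unpack, then bring each stack back up.
mkdir -p "$RESTORE_DIR"
tar -xzf "$BACKUP" -C "$RESTORE_DIR"
# for d in "$RESTORE_DIR"/stacks/*/; do (cd "$d" && docker compose up -d); done
```

The `docker compose up` loop is left commented out so the sketch runs even on a box without Docker installed.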

[–] [email protected] 4 points 8 months ago (1 children)

I have made that migration myself, going from a Raspberry Pi 4 to an N100-based NAS. It was 10 minutes for the software stack, as you said. That's not counting the media migration, which happened in the background over a few hours on WiFi (I had everything on an external hard drive at the time).

That last part is the only thing I would change about my self-hosting solution. Yes, the NAS has a nice form factor, is power efficient and has so far been very good for my needs (no lag like the RPi4). However, I have seen they don't really sell motherboards or parts to repair them; they want you to replace it with another one. Reason 2 is vendor lock-in: depending on the options you select when creating the storage groups/pools (whatever they are called), you could be stuck needing something from the same vendor to read your data if the device stops working but the disks are salvageable. Reason 3 is that they've had security incidents, so a lot of the "features" I would never recommend using, to avoid exposing your data to ransomware over the internet. I don't trust their competitors either; I know how commercial software is made, with the smallest amount of care for security best practices.

[–] [email protected] 3 points 8 months ago (1 children)

Yeah, I just use plain boring desktop hardware. (Oh no! I'm experiencing data corruption due to the lack of ECC!) It's cheap, it's available, it's trivial to upgrade and expand, and there are very few gotchas in there: you get pretty much exactly what it looks like you get.

Also nice is that you can have a Ship of Theseus NAS by upgrading what needs upgrading as you go along, and you aren't tied into entire platform swaps unless it makes sense - my last big rebuild was 3 years ago, but this is basically a 10 year old NAS at this point.

[–] [email protected] 3 points 8 months ago (5 children)

btrfs with its send/receive (incremental fs-level backups) is already stable enough for mostly everything (it just has some issues with RAID 5/6), and is much more performant than ZFS. It's also in the Linux kernel tree, which is hugely useful. Of course, that's only if ZFS-like functionality is what you're looking for.
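For reference, the send/receive workflow looks roughly like this. This is a command sketch only: it needs root and an actual btrfs filesystem, and the paths are made up for the example.

```shell
# Initial full backup: take a read-only snapshot, then stream it to the backup disk.
btrfs subvolume snapshot -r /home /home/.snap-1
btrfs send /home/.snap-1 | btrfs receive /mnt/backup

# Later: with -p (parent), only the delta between the snapshots goes over the wire.
btrfs subvolume snapshot -r /home /home/.snap-2
btrfs send -p /home/.snap-1 /home/.snap-2 | btrfs receive /mnt/backup
```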

[–] [email protected] 7 points 8 months ago (2 children)

"Already stable enough"

  1. no it isn't.
  2. it fucking should be, it's been around 15 years!
[–] [email protected] 3 points 8 months ago* (last edited 8 months ago)

My only experience with btrfs was when trying out openSUSE Tumbleweed. Within a couple of days my home partition was busted; the next time it was another partition. No idea if the problems could have been fixed, as these were fairly new installations to give openSUSE a try and I couldn't be bothered to fix a system that was giving me trouble from the very beginning.

Between all the options that just work (TM), btrfs is the one I've learned to stay away from.

EDIT: that was four or five years ago

[–] [email protected] 4 points 8 months ago

Honestly it's not; BTRFS has been in my 'that's neat, but it's still got a non-zero chance of deciding to light everything on fire because it's bored' list for, uh, a decade now?

The NAS build is old enough to more or less predate BTRFS being usable (closing in on a decade since I did the initial OS install, jeez), and none of the features matter for what I'm storing: if every drive in my NAS died today, I'd be very annoyed for a couple of hours during the rebuild, and would lose terabytes of Linux ISOs that I can just download again, if I wanted to use Jellyfin to install them a second time. (Any data I care about is pulled offsite at least once a day, so I've got pretty comprehensive backups minus the ISOs.)

I know EXT4 and mergerfs and snapraid are not cool, or have shiny features, but I've also had zero problems with them over the last decade, even between Ubuntu upgrades (16.04, 18.04, 20.04, 22.04) and hardware platform upgrades (6600k, 8700k, 10950k) and the entire replacement of all the system drives (hdd -> ssd -> nvme) and the expansion of and replacement of dead HDDs, of varying sizes (4tb drives to 8tb drives to 16tb drives to some 20tb drives).

It all just... worked, and at no point was I concerned about the filesystem not working if I replaced or upgraded or changed something, which is not something ZFS or BTRFS would have guaranteed during that same time window.
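For anyone wondering what that stack involves, a mergerfs pool plus snapraid parity is only a few lines of configuration. The device names and mount points below are illustrative, not taken from the comment.

```
# /etc/fstab -- pool three data drives into one mount with mergerfs
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  cache.files=partial,dropcacheonclose=true,category.create=mfs  0 0

# /etc/snapraid.conf -- one parity drive protects the data drives
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
data d3 /mnt/disk3
```

A periodic `snapraid sync` (e.g. from cron) then updates parity, and any single dead drive can be rebuilt from the rest.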

[–] [email protected] 52 points 8 months ago (2 children)
  • you do not need kubernetes
  • you do not need anything to be "high availability"; that just adds a ton of complexity for no benefit. Nobody will die or go broke if your homelab is down for a few days.
  • tailscale is awesome
  • docker-compose is awesome
  • irreplaceable data gets one offsite backup, one local backup, and ideally one normally offline backup (in case you get ransomwared)
  • yubikeys are cool and surprisingly easy to use
  • don’t offer your services to other people until you are sure you can support it, your backups are squared away, and you are happy with how things are set up.
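On the docker-compose point: a minimal compose file with bind mounts is most of what a typical service needs. The image, port and paths here are placeholders for whatever you actually run.

```yaml
# docker-compose.yml -- one service, state kept in plain host directories
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    restart: unless-stopped
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config         # bind mount: back this directory up and you're done
      - /mnt/pool/media:/media:ro
```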
[–] [email protected] 20 points 8 months ago* (last edited 8 months ago) (17 children)

To piggy back on your “You don’t need k8s or high availability”,

If you want to optimize your setup in a way that’s actually beneficial on the small, self hosted scale, then what you should aim for is reproducibility. Docker compose, Ansible, NixOS, whatever your pleasure. The ability to quickly take your entire environment from one box and move it to another, either because you’re switching cloud providers or got a nicer hardware box from a garage sale.

When Linode was acquired by Akamai and subsequently renamed, I moved all my cloud containers to Vultr by rsyncing the folder structure to the new VM over SSH, then running the compose file on the new server. The entire migration short of changing DNS records took like 5 minutes of hands-on time.
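The same reproducibility idea in Ansible terms can be a single small playbook that copies the compose projects over and starts them. The hosts, paths and stack names below are invented, and the `community.docker` collection is assumed to be installed.

```yaml
# site.yml -- push compose projects to a box and bring them up
- hosts: homelab
  become: true
  tasks:
    - name: Copy all stack definitions
      ansible.builtin.copy:
        src: stacks/
        dest: /opt/stacks/

    - name: Start every stack
      community.docker.docker_compose_v2:
        project_src: "/opt/stacks/{{ item }}"
      loop: [jellyfin, nextcloud]
```

Running the same playbook against a fresh host is the whole migration, short of moving the bind-mounted data.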

[–] [email protected] 3 points 8 months ago (1 children)

Ansible is so simple yet so elegant.

[–] [email protected] 3 points 8 months ago (3 children)

Not needing Kubernetes is a broad statement. It allows for better storage management and literally gives you a reverse proxy configurable with YAML, if you know what you're doing.

[–] [email protected] 9 points 8 months ago (3 children)

Yes, but you don't need Kubernetes from the start.

[–] [email protected] 46 points 8 months ago (1 children)

I wish I knew not to trust closed source self-hosted applications, such as Plex. Would have saved a lot of time and money.

[–] [email protected] 9 points 8 months ago* (last edited 8 months ago) (1 children)
[–] [email protected] 36 points 8 months ago* (last edited 8 months ago) (2 children)

Plex is a great example here. I've been a Hetzner customer for many, many years, and bought a lifetime license to Plex, only to receive a notification from Plex a few months later that I am no longer allowed to self-host Plex for myself (and only myself) at Hetzner, and that they will block all access to my self-hosted Plex instance. I tried to ask for leniency or a refund, but that was wasted effort as well.

In short, I was caught in the crossfire when a for-profit company tried to please Hollywood by attempting to reduce piracy, so they could get new VC funding.

...

I am now a happy Jellyfin user and warmly recommend all Plex users give it a try; the Jellyfin community is awesome!

(Use your favourite search engine to look up "Hetzner Plex ban" for more details)

[–] [email protected] 9 points 8 months ago (1 children)

@zutto @warlaan Searching around, this was Plex banning the use of Plex on Hetzner's IP block, right? Not a decision made by Hetzner?

[–] [email protected] 11 points 8 months ago* (last edited 8 months ago)

Yes, correct.

I apologize if someone misunderstood my reply, Plex was the bad actor here.

[–] [email protected] 4 points 8 months ago (1 children)

Are you still on Hetzner? How's their customer support in general?

[–] [email protected] 5 points 8 months ago

Still with Hetzner, yeah. I haven't had to deal with Hetzner customer support in recent years at all, but they have been great in the past.

[–] [email protected] 35 points 8 months ago

It is much easier to buy one "hefty" physical machine and run Proxmox with virtual machines for servers than it is to run multiple Raspberry Pis. After living that life for years, I'm a Proxmox shill now. Backups are important (read the other comments), and Proxmox makes backup/restore easy. Because eventually you will fuck a server up beyond repair, you will lose data, and you will feel terrible about it. Learn from my mistakes.

[–] [email protected] 32 points 8 months ago (1 children)

My reason for self hosting is being in control of my shit, and not the cloud provider.

I run Jellyfin, Soulseek, FreshRSS, Audiobookshelf and Nextcloud, all on a Pi 4 with an SSD attached, accessible via WireGuard. That SSD is also exported as an NFS share.

As I already knew Linux very well before I started my own cloud, I didn't really have to learn much.

The biggest resource I could recommend is that GitHub repository where a huge number of awesome self-hostable solutions are linked.

[–] [email protected] 27 points 8 months ago (1 children)
[–] [email protected] 4 points 8 months ago

Yes that one, thanks.

[–] [email protected] 19 points 8 months ago* (last edited 8 months ago) (6 children)

2. What do you wish you would have known as a beginner starting out?

Caddy. Once you try Caddy there's no turning back to Nginx or Apache.

[–] [email protected] 16 points 8 months ago (7 children)

That's what everyone thinks for a while, and then they go back to Nginx.

[–] [email protected] 4 points 8 months ago

Eh, my main reason for switching is that Caddy has Let's Encrypt support built in. My Caddyfile is really simple: it's just a reverse proxy that handles TLS and proxies regular HTTP to my services. It doesn't serve any files or really know anything about the services. Here's my setup:

  1. HAProxy - directs subdomains to devices (in VPN) based on SNI
  2. Caddy - manages TLS and LetsEncrypt and communicates w/ services over HTTP
  3. Nginx - serves files for things like NextCloud, if needed (most services have their own HTTP server)

Each of these is a separate Docker container, which makes it really easy to manage and diagnose problems. The syntax for Nginx is more complex for 1 & 2, and the performance benefit of managing it all in one service just isn't relevant for a self-hosted system, so I use this layered approach that keeps each level as simple as possible.
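For anyone who hasn't seen Caddy, the middle layer of a setup like that is strikingly short, since naming a site implies TLS and certificate management. The domains and ports below are placeholders, not the commenter's actual config.

```
# Caddyfile -- TLS + Let's Encrypt are automatic for any named site
jellyfin.example.com {
    reverse_proxy 127.0.0.1:8096
}

cloud.example.com {
    reverse_proxy 127.0.0.1:9000
}
```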

[–] [email protected] 3 points 8 months ago (1 children)

As someone who just learned about Caddy, could you elaborate?

[–] [email protected] 5 points 8 months ago* (last edited 8 months ago) (8 children)

You usually want less integration, not more: simple, self-contained things. Nginx is good at that. That's also why you don't want to use Nginx Proxy Manager or Certbot's Nginx integration, etc. At first they look like they make things easier, but there's too much hidden complexity under the hood.

Also, sooner or later you'll run into some software that you'd really like to try which is only documented for Nginx and uses some sort of image caching or similar that's hard to replicate with Caddy.

[–] [email protected] 3 points 8 months ago

Apparently Traefik might be a better fit if you run Docker Compose and the like, as it does auto-discovery, which reduces the amount of manual configuration required.
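The auto-discovery works through container labels, so the routing config lives next to the service it routes. A hedged sketch (the router name, domain and port are assumptions for the example):

```yaml
# compose snippet -- Traefik reads these labels and wires the route itself
services:
  freshrss:
    image: freshrss/freshrss:latest
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.freshrss.rule=Host(`rss.example.com`)"
      - "traefik.http.services.freshrss.loadbalancer.server.port=80"
```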

[–] [email protected] 17 points 8 months ago

I'll parrot the top reply from Reddit on that one: to me, self-hosting starts as a learning journey. There's no right or wrong way; if anything, I intentionally do wacky, weird things to test the limits of my knowledge. The mistakes and troubles are where you learn. You don't really understand the significance of good backups until you've had to restore from them.

Even in production, it differs wildly. I have customers for whom I set up a bare-metal Ubuntu box in some datacenter for cheap; they've been running on that setup for 10 years. A small mom-and-pop shop will never need a whole cluster of machines. Then at my day job we're looking at things like Kubernetes and very heavyweight stacks, because we handle a lot of traffic.

Some people self-host a PiHole on a Raspberry Pi and that's all they need. Some people have entire NAS setups with smart TVs accessing their Plex/Jellyfin servers for the whole extended family. I host my own email, which is a pain in the ass to get working reliably, and keeping your IP reputation clean is a battle of its own.

I guess the only thing you should know is: you need some time to commit to maintaining your stuff if you don't want it to break or get breached (if it's exposed to the Internet), and a willingness to learn, because self-hosting isn't a turnkey experience. It can be a turnkey installation, but when your SD card or drive fails you're still on your own to troubleshoot and fix it. You don't set up a Nextcloud server to replace Google Drive with the expectation that you can shove the server in a closet forever. Owning your infrastructure and data comes with a small but very important upkeep time investment.

[–] [email protected] 8 points 8 months ago (2 children)
  1. I've learned a number of tools I'd never used before, and refreshed my skills from when I used to be a sysadmin back in college. I can also do things other people don't loudly recommend, but fit my style (Proxmox + Puppet for VMs), which is nice. If you have the right skills, it's arbitrarily flexible.

  2. What electricity costs in my area. $0.32/kWh at the wrong time of day. Pricier hardware could have saved me money in the long run. Bigger drives would also mean fewer of them, and thus less power consumption.

  3. Google, selfhosting communities like this one, and tutorial-oriented YouTubers like NetworkChuck. Get ideas from people, learn enough to make it happen, then tweak it so you understand it. Repeat, and you'll eventually know a lot.

[–] [email protected] 7 points 8 months ago (1 children)

Things I would've wished to know:

  • don't rush things into production.
  • don't offer a service to a friend without really knowing it and having the experience to keep it up when needed.
  • don't make it your life. The services are there to help you, not to be your life.
  • use Docker. Podman is not yet ready for mainstream, in my experience. When the services move to Podman officially, it's time to move. Just because Jellyfin offers official documentation for it doesn't mean it'll work with Podman (my experience).
  • test all services with the base Docker install first. If something isn't working, there may be a bug or two. Report it if it is a bug; hunt a bug down if you can. Maybe it's just something that isn't documented (well enough) for a beginner.
  • start on your own machine before getting a server. A Pi is enough for lightweight stuff, but probably not for a fast and smooth experience with e.g. Nextcloud.
  • backup.
  • search for help. If it's not available in a forum, ask for help. Don't waste many, many hours if something isn't working, but research it first and read the documentation.
[–] [email protected] 10 points 8 months ago* (last edited 8 months ago)

Podman is not yet ready for mainstream, in my experience

My experience varies wildly from yours, so please don't take this bit as gospel.

I have yet to find a container that doesn't work perfectly well in Podman, though the options may not be the same. Most issues I've found with running containers boil down to things that would be equally a problem in Docker. A sample:

  • "rootless" containers are hard to configure. It can almost always be fixed with "--privileged" or some combination of permission flags. This would be equally true for docker; the only meaningful difference is podman tries to push everything into rootless. You don't have to.
  • network filesystems cause headaches, especially smbfs plus an sqlite-backed app. I've had to use NFS, or ext4 inside a network-mounted image, for some apps. This problem is identical for Docker.
  • container networking, for specific cases, needs to be managed carefully. These cases are identical for Docker.

And that's it. I generally run things once from the podman command line, then use podlet to create a quadlet out of that configuration, which is something you can't do with Docker. If you're having any trouble running containers under Podman, try the --privileged shortcut, confirm it works, and then double back if you think you really need rootless.

[–] [email protected] 7 points 8 months ago (2 children)

Podman quadlets have been a blessing. They basically let you manage containers as if they were simple services. You just plop a container unit file in /etc/containers/systemd/, daemon-reload and presto, you've got a service that other containers or services can depend on.
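A quadlet really is just a small INI-style unit file. A minimal hypothetical example (the image, port and volume path are invented):

```
# /etc/containers/systemd/freshrss.container
[Unit]
Description=FreshRSS via quadlet

[Container]
Image=docker.io/freshrss/freshrss:latest
PublishPort=8080:80
Volume=/srv/freshrss:/var/www/FreshRSS/data

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, this behaves like a regular `freshrss.service` that other units can depend on.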

[–] [email protected] 7 points 8 months ago
  1. less is more; it's fine to sunset stuff you don't use enough to justify the CPU cycles, memory and power it consumes
  2. search warrants are a real thing and you should not trust others to use your infrastructure responsibly because you will be the one paying for it if they don't.
[–] [email protected] 7 points 8 months ago (1 children)
  1. data stays local, for the most part. Every file you send to the cloud effectively becomes property of the cloud: yeah, you get access, but so do the hosting provider, their third-party services, and the usual government compliance regimes. Hard drives are cheap and fast enough.

  2. not quite answering this one directly, but I very much enjoy learning and evolving. Technology changes, though, and sometimes implementing new software like Caddy/Traefik on existing setups is a PITA! I suppose if I went back in time, I would tell myself to do it the hard way and save a headache later. I wouldn't have listened to me, though.

  3. Portainer is so nice, but it has quirks. It's no replacement for the command line, but wow, does it save time. The console is nerdy, but when time is on the line, find a good GUI.

[–] [email protected] 6 points 8 months ago (1 children)

For #2: use a dns-01 challenge to generate wildcard SSL certs. Saves so much time and nerves.
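In Caddy, for example, that is one `tls` block, though it requires a build of Caddy with the matching DNS provider module compiled in. The provider and the token variable below are assumptions for the example.

```
# Caddyfile -- wildcard cert via dns-01; no inbound port 80/443 needed for issuance
*.home.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:8096
}
```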

[–] [email protected] 6 points 8 months ago
  1. Our internet goes out periodically, so having everything local is really nice. I set up DNS on my router, so my TLS certs work fine without hitting the internet.
  2. I wish someone would've taught me how to rip Blu-rays. It wasn't a big deal in the end, but everything online made flashing firmware onto a Blu-ray drive sound super sketchy.
  3. I'm honestly not sure. I'm in CS and am really into Linux, so I honestly don't know what would be helpful. I guess start small and get one thing working at a time. There's a ton of resources online for all kinds of skill levels, and as long as you do one thing at a time, you should eventually see success.
[–] [email protected] 6 points 8 months ago (2 children)

For #2 and #3, it's probably exceedingly obvious, but I wish I had truly understood SSH, remote VS Code, and enough git to put my configs on a git server.

So much easier to manage things now that I'm not trying to edit docker compose files with nano, hoping and praying I find the issue when I mess something up.
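A related low-effort win is a `~/.ssh/config` entry, so the box gets a short name that ssh, scp, rsync and VS Code's Remote-SSH all reuse. The host name, IP, user and key path below are examples.

```
# ~/.ssh/config -- name the box once, then `ssh nas` just works
Host nas
    HostName 192.168.1.50
    User deploy
    IdentityFile ~/.ssh/id_ed25519
```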

[–] [email protected] 4 points 8 months ago* (last edited 8 months ago)

For me #2 would be "you have ADHD and won't be able to be medicated so just don't"

I've mentioned elsewhere my server upgrade project took longer than expected.

Just last night I threw it all into the trash because I just can't anymore
