Don't forget to donate to your favorite OSS projects.
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
I quickly got pissed at synology and QNAP and just started making my own shit. Now when anything fails it’s my own damn fault and I can actually fix it. This sounds bad but it’s actually a much better experience. I learn a lot and have fun. I’m the guy who made all those G4 cube retrofit kits on Thingiverse. It’s been a great distraction for me over the years.
On the subject of containers, learn podman. That’s where everybody seems to be migrating to.
Thank you, I'll add podman to the list of things to check out. Feels good to know I'll get to set this up however I want.
There's also Incus, but if you'll be using your TrueNAS box to host the containers, I suggest you stick to Docker as it's the default. If you're building a second container box, Proxmox, Docker, Podman, and Incus are your best bets. Choose what fits your expertise and needs best.
Yup, my servers just run bare Debian and ZFS and I have backup scripts that parse the docker compose files for how often to run and keep backups.
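A minimal sketch of that idea: read a per-service backup policy out of compose-file labels. The label names (`backup.every`, `backup.keep`) are hypothetical, and a real script would use a YAML parser instead of a regex, but it shows the shape.

```python
import re

# Example compose snippet; the backup.* labels are a made-up convention.
COMPOSE_TEXT = """\
services:
  jellyfin:
    image: jellyfin/jellyfin
    labels:
      backup.every: "24h"
      backup.keep: "7"
"""

def backup_policy(text: str) -> dict:
    """Pull backup frequency and retention from compose labels, with defaults."""
    every = re.search(r'backup\.every:\s*"?(\w+)"?', text)
    keep = re.search(r'backup\.keep:\s*"?(\d+)"?', text)
    return {
        "every": every.group(1) if every else "24h",
        "keep": int(keep.group(1)) if keep else 3,
    }

print(backup_policy(COMPOSE_TEXT))  # {'every': '24h', 'keep': 7}
```

From there, a cron job can walk every compose directory and schedule snapshots accordingly.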
I quickly got pissed at synology and QNAP and just started making my own shit.
It sucks, because I really like Synology's ecosystem, but I don't buy vendor lock-in devices. Luckily we have Arc, which lets you run SynologyOS on bare metal. If you can get it working with your hardware, it's badass.
Why they don't sell home server licenses for SynologyOS is beyond my understanding. It's a really nice little OS and is specifically designed for NAS.
When my QNAP finally died on me, I decided to build a DIY NAS and did consider some of the NAS OSes, but I ultimately decided that I really just wanted a regular Linux server. I always find the built-in app stores limiting and end up manually running Docker commands anyways so I don't feel like I ever take advantage of the OS features.
I just have an Arch box and several docker-compose files for my various self-hosting needs, and it's all stored on top of a ZFS RaidZ-1. The ZFS array does monthly scrubs and sends me an email with the results. Sometimes keeping it simple is the best option, but YMMV.
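For anyone curious how the scrub-plus-email part looks, here's one common way to wire it up with cron and ZFS's event daemon (ZED). The pool name `tank` and the email address are placeholders; paths can differ by distro.

```
# /etc/cron.d/zfs-scrub -- start a scrub at 02:00 on the 1st of each month
0 2 1 * * root /usr/sbin/zpool scrub tank

# /etc/zfs/zed.d/zed.rc -- have ZED mail scrub/error events
ZED_EMAIL_ADDR="admin@example.com"
ZED_NOTIFY_VERBOSE=1
```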
I like Unraid because it's essentially "just Linux" but with a nice web UI. It's got a great UI for Docker, VMs (KVM) and Linux containers (LXC).
Just got unraid up and running for the first time today. There’s a bit of a learning curve coming from TrueNAS Scale but it supports my use case: throwing whatever spinning rust I have into one big array. Seems to work alright, hardware could use additional cooling so I’ve shut it off until a new heatsink arrives.
What made you switch from TrueNAS Scale to Unraid, if I may ask? Is it just the ability to mix different drive sizes? I'm currently using TrueNAS Core and thinking about migrating to TrueNAS Scale.
Yes, that’s the only reason. You can mix drive sizes and still have a dedicated parity drive to rebuild from in case things go poorly. I am aware that it’s basically LVM with extra steps, but for a NAS I just want it to be as appliance-like as possible.
Still using Scale at work, though - that use case is different.
Thanks for your response!
My NASs are purely NAS, I prefer a Debian server for... Pretty much everything. But my storage only does storage, I keep those separate (even for an old PC acting as a NAS).
No matter what goes down, I can bring it back up, even with a hardware failure.
I used to do that. I had a QNAP NAS and a small Intel NUC running Arch that would host all my services. I would just mount the NAS folders via Samba into the NUC. Problem is that services can't watch the filesystem for changes. If I add a video to my Jellyfin directory, Jellyfin won't automatically initiate a scan.
Nowadays, I just combine them into one. Just seems simpler that way.
I just have my downloader trigger a scan at completion.
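If anyone wants to wire that up themselves, Jellyfin exposes a `POST /Library/Refresh` endpoint that kicks off a library scan. A stdlib-only sketch; the host, port, and API key below are placeholders:

```python
import urllib.request

def trigger_jellyfin_scan(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a request for Jellyfin's POST /Library/Refresh endpoint."""
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/Library/Refresh",
        method="POST",
        headers={"X-Emby-Token": api_key},  # Jellyfin's API-key header
    )

# From a download-complete hook, sending it is one line:
# urllib.request.urlopen(trigger_jellyfin_scan("http://nas:8096", "YOUR_API_KEY"))
```

Most downloaders (qBittorrent, *arr apps) can run a script like this on completion.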
I have a few Proxmox clusters going, so combining it all wouldn't be practical. This way my servers (tiny/mini/micros I've repurposed) stay small with decent-sized SSDs, big storage lives in two NASes, and a third handles backups.
I have a feeling I may find myself here in time, as I develop this setup more.
When you end up with a mini homelab, look into komo.do for container orchestration over overkill options like Kubernetes or Portainer.
anything worth doing is worth overdoing
I prefer dockge for putting all of my compositions in one place.
And being able to manage multiple hosts in one UI is the absolute tits. There are a few features I miss from portainer but none strong enough to pull me back. And no bs SaaS licensing and costs...
But k3s so niiiice.
So what's the threshold for 'mini' vs 'you need to stop'...? Number of hosts, or number of containers, or number of public services, or...
Not sure, currently have 8 nodes and 40 apps running
When you lose a system. It responds to ping; all services are up, but you can't find the damn thing.
So, not a number so much as a limit to your organizational skill+effort.
Consider that a new power efficient CPU may be cheaper by consuming less electricity over a few years!
I hadn't considered that! Thank you.
I'm hoping the OS, as it's designed for this, is going to be helpful in getting the right balance with power usage.
You can calculate it!
Take your power usage and compute the cost over a year.
I will soon add an SSD: I finally moved from RAID 1 to RAID 5 (so more HDDs), and it consumes more electricity.
I can measure how much power it draws because the server is on a smart plug.
I calculated an additional 20-30€ a year of electricity; adding an SSD as a read/write cache would let the HDDs stop spinning, make things faster, and be cost-effective over a few years.
To put this into perspective for you: if your NAS sits at idle 90% of the time (probably true), an older CPU idles at 50 W (kinda high, but maybe) and a newer one at 15 W, that 35 W difference over 90% of a year (about 7,884 hours) saves roughly 276 kWh. At the average US price of 12.89¢ per kWh, that's about $35 a year. So it's not nothing, but nothing crazy; lower idle wattage also means lower temps, so components last longer, which is the real savings.
If an older CPU is only going to last you 5 years when a newer one might last 10, you're looking at roughly $350 in energy savings over that decade, and a CPU today is generally going to be cheaper than a CPU in 10 years (probably). So it can make sense to spend an extra $200 on a newer CPU now and still come out around $150 ahead over 10 years versus the older CPU.
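The arithmetic is easy to redo with your own numbers. A back-of-envelope sketch, assuming the example figures above (50 W old vs 15 W new idle draw, 90% idle duty cycle, 12.89¢/kWh):

```python
HOURS_PER_YEAR = 8760

def annual_cost(watts: float, price_per_kwh: float, hours: float = HOURS_PER_YEAR) -> float:
    """Cost of a constant electrical draw over the given number of hours."""
    return watts / 1000 * hours * price_per_kwh

old_idle_w, new_idle_w = 50, 15      # example idle figures, not measurements
idle_hours = 0.9 * HOURS_PER_YEAR    # assume the box idles ~90% of the time
saved_kwh = (old_idle_w - new_idle_w) / 1000 * idle_hours

print(round(saved_kwh, 1))                                   # 275.9 kWh/year
print(round(annual_cost(old_idle_w - new_idle_w, 0.1289, idle_hours), 2))  # 35.57 $/year
```

Swap in a smart-plug measurement for the wattages and your local tariff for the price to get a real number.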
I like TrueNAS! After trying TrueNAS on bare metal for a year or two, I now run it as a VM under Proxmox.
so awesome
You're the second person to suggest that approach. I'll check it out before I do setup next week. Thanks!
I've tried TrueNAS, Rockstor, Openfiler (iSCSI), EasyNAS, and a few others and TrueNAS is easily the favorite. Running it alongside Proxmox is ideal if your server is beefy enough.
If you are concerned about TrueNAS, go look at XigmaNAS. This is the original FreeNAS project from before iX acquired the name.
Expect to be ostracised here but if your drives are "junk" (some have SMR), I got better parity performance with Windows Storage Spaces (WSS) than with Unraid. Recoverability and compatibility with old junk hardware was very good too, whereas the bits I had lying around gave me Linux driver conflicts. Trying to install ZFS on Linux gave me a headache, and I then realised I couldn't expand the array easily when I found other cheap crappy drives to add. WSS doesn't care, it just keeps trucking.
As for a licence, the old "upgrade from the windows 7 enterprise key that got leaked" trick did it for me. Never paid for it.
I found that I needed to spend more on components with better driver support to have a working NAS on Linux. Windows isn't open source, but for me it was the cheapest total cost option, and you can still run your containers in it.
I reckon maybe performance is worse on write for WSS? I paid for a PrimoCache licence to fix that though, and now my SSD gets used for initial writes and slowly spools over to the array as the array is able to calculate parity and write with my 10 year old CPU.
Welcome! I personally run Proxmox as my host OS, then virtualize a TrueNAS Core VM and keep my Docker setup in another LXC. A bit more complex than straight-up TrueNAS, but it's saved me before. I'd recommend looking into it.
You have plenty of options. I use Unraid because I bought it before it became a subscription. But I have a friend running Fedora Server with Cockpit and running everything from Docker containers. The options are endless. Proxmox is a great choice.
Late to the party, but I decided to pick up a 13th-gen ASUS NUC with an i7 over a prebuilt NAS, bought a couple of external hard-disk bays, and set up Proxmox running a headless Debian 12 VM. Almost everything runs great. The one mistake was using Debian 12: its Linux kernel is far enough out of date that it doesn't support the CPU properly.
What's the self-hosted guide to security when opening up ports to the public?
Don't. Use a VPN like Tailscale or Wireguard. Tailscale uses the Wireguard protocol but it's very easy to configure, and will automatically set up a peer-to-peer mesh network for you (each node on the VPN can directly reach any other node, without having to route through a central server).
The only things that should be exposed publicly are things that absolutely need to be - for example, parts of Home Assistant need to be publicly exposed if you use the Google Assistant or Alexa integrations, since Google and Amazon need to be able to reach it.
Use Tailscale for host nodes, and use the Tailscale Docker container in a compose stack as a sidecar to an app. That way the app is on your tailnet as if it were its own computer. Use tailscale serve for reverse proxying the apps. Then set up a VPS node (I use Linode's $5 node) with Tailscale and configure that to be your DMZ into your tailnet.
For the DMZ, use Caddy, UFW, and fail2ban. Also take advantage of ACLs in the Tailscale admin console so that only the VPS can route traffic to the specific apps you want to expose. My current project is to work Authelia into this setup, so a user logs into one exposed app and can traverse to other exposed apps through header/token authentication.
Oh also, segment the tailnet using different authentication keys. Each host node should have its own key, all the apps on a host node should have a shared key, and all public facing clients should have a common shared key. That way in case of compromise you can revoke the affected keys without bringing down your network.
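The sidecar pattern above roughly translates to a compose file like this. A sketch, not a drop-in config: the image tags, hostname, and auth key are placeholders, and `TS_AUTHKEY`/`TS_STATE_DIR` are the Tailscale container's standard environment variables.

```yaml
services:
  tailscale:
    image: tailscale/tailscale:latest
    hostname: jellyfin                  # how this stack appears on the tailnet
    environment:
      - TS_AUTHKEY=tskey-auth-xxxxx     # per-stack auth key, per the advice above
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - ts-state:/var/lib/tailscale
    cap_add:
      - NET_ADMIN

  jellyfin:
    image: jellyfin/jellyfin:latest
    network_mode: service:tailscale     # sidecar: share the tailscale netns
    depends_on:
      - tailscale

volumes:
  ts-state:
```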
Basically, not to. Open one port for a VPN like WireGuard to accept incoming connections, and that's it. Use the VPN to connect to your home network and access your services that way.
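For reference, the server side of that setup is a single config file and one open UDP port. A minimal sketch with placeholder keys and an example subnet:

```
# /etc/wireguard/wg0.conf -- forward UDP 51820 to this host, nothing else
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# your phone or laptop
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```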
- deploy to a DMZ
- filtered by firewall
- host-based isolation
- zero trust
- etc.