Proxmox is Debian under the hood. It's just a QEMU and LXC management interface.
Yeah, and QEMU and LXC are very much legacy at this point. Stick with Docker/Podman/Kubernetes for containers.
Right tool for the job, mate; not everything works great in a container.
Also, Proxmox is not legacy; it's used a lot in homelabs and also in some companies.
I use Proxmox to carve up my dedicated host with OVH; 3 of the VMs run Docker anyway.
I'm not saying it's bad software, but the times of manually configuring VMs and LXC containers with a GUI or Ansible are gone.
All new build-outs are GitOps-driven and use containerd-based containers now.
For the legacy VM appliances, Proxmox works well, but there's also OpenShift Virtualization (aka KubeVirt) if you want to take advantage of the Kubernetes ecosystem.
If you need bare-metal, then usually that gets provisioned with something like packer/nixos-generators or cloud-init.
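As a sketch of the cloud-init route, a minimal user-data file looks something like this (the hostname, user, and key are placeholders, not anything from a real build-out):

```yaml
#cloud-config
# Minimal user-data sketch: set a hostname, create an admin user with an
# SSH key, and install/enable the guest agent on first boot.
hostname: node01                 # hypothetical name
users:
  - name: admin                  # hypothetical user
    groups: [sudo]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... admin@example   # placeholder key
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```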
Yes, but no. There are still a lot of places using old-fashioned VMs; my company is still building VMs from an AWS AMI and running Ansible to install all the stuff we need. Some places will move to containers, and that's great, but containers won't solve every problem.
Yes, it's fine to still have VMs, but you shouldn't be building out new applications and new environments on VMs or LXC.
The only VMs I've seen in production at my customers recently are application test environments for applications that require kernel access. Those test environments are managed by software running in containers, and often even use something like OpenShift Virtualization so that the entire VM runs inside a container.
but you shouldn't be building out new applications and new environments on VMs or LXC
That's a bold statement; VMs might be just fine for some.
Use whatever is best for you; if that's containers, great. If that's a VM, sure. Just make sure you keep it secure.
Some of us don't build applications, we use them as built by other companies. If we're really unlucky they refuse to support running on a VM.
Why would you install a GUI on a VM designated to run a Docker instance?
You should take a serious look at what actual companies run. It's typically nested VMs running k8s or similar. I run three nodes, with several VMs (each running Docker, or other services that require a VM) that I can migrate between nodes depending on my needs.
For example: One of my nodes needed a fan replaced. I migrated the VM and LXC containers it hosted to another node, then pulled it from the cluster to do the job. The service saw minimal downtime, kids/wife didn't complain at all, and I could test it to make sure it was functioning properly before reinstalling it into the cluster and migrating things back at a more convenient time.
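For anyone who hasn't done it, the migration is scriptable from the CLI too, something like this (the VMIDs and node names are just examples):

```sh
# Live-migrate VM 101 to node2 while it keeps running
qm migrate 101 node2 --online

# LXC containers can't live-migrate; --restart stops, moves, and restarts them
pct migrate 200 node2 --restart
```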
I'm a DevOps/Platform Engineering consultant, so I've worked with about a dozen different customers on all different sorts of environments.
I have seen some of my customers use nested VMs, but that was because they were still using VMware or similar for all of their compute. My coworkers say they're working on shutting down their VMware environments now.
Otherwise, most of my customers are running Kubernetes directly on bare metal or directly on cloud instances. Typically the distributions they're using are OpenShift, AKS, or EKS.
My homelab is all bare metal. If a node goes down, all the containers get restarted on a different node.
My homelab is fully GitOps; you can see all of my Kubernetes manifests and NixOS configs here:
Sometimes, VMs are simply the better solution.
I run a semi-production DB cluster at work. We have 17 VMs running, and it's resilient (a different team handles VMware and the hardware).
You are going to what, install Kubernetes on every node?
It is far easier and more flexible to use VMs and maybe some VM templates and Ansible.
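To sketch it (the inventory group and PostgreSQL are placeholders; pick whatever your DB actually is):

```yaml
# Hypothetical playbook: configure freshly cloned DB VMs from a template
- hosts: db_cluster              # assumed inventory group
  become: true
  tasks:
    - name: Install the database packages
      ansible.builtin.apt:
        name: postgresql         # placeholder; swap in your actual DB
        state: present
        update_cache: true
    - name: Make sure the service is up
      ansible.builtin.service:
        name: postgresql
        state: started
        enabled: true
```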
QEMU is legacy? Pray tell me how you're running VMs on architectures other than x86 on modern computers without QEMU.
Agreed.
I run podman w/ rootless containers, and it works pretty well. Podman is extra nice in that it has decent support for Kubernetes, so there's a smooth transition path from podman -> kubernetes if you ever want/need it. Docker works well too, and docker compose is pretty simple to get into.
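On the podman -> kubernetes path, it looks roughly like this (the container name and image are just examples; older podman versions spell these `podman generate kube` / `podman play kube`):

```sh
# Run a rootless container as usual
podman run -d --name web -p 8080:80 docker.io/library/nginx

# Export it as a Kubernetes-style manifest
podman kube generate web > web.yaml

# Replay the manifest with podman (on another box, or after removing
# the original), or kubectl apply it to a real cluster later
podman kube play web.yaml
```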
Yeah, Kubernetes is more automated and expandable, but docker compose has a ton of good examples and it's really easy to get into as a beginner.
Kubernetes is also designed for clustered workloads, so if you are mostly hosting on one or two machines, YAGNI applies.
I recommend people start w/ docker compose due to documentation, but I personally am switching to podman quadlets w/ rootless containers.
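For reference, a quadlet is just a systemd-style unit file that podman turns into a service; a minimal rootless sketch (the path and image are examples):

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, `systemctl --user start web.service` runs it, and it comes back at boot if lingering is enabled for your user.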
Yeah, definitely true.
I'm a big fan of single-node kubernetes though, tbh. Kubernetes is an automation platform first and foremost, so it's super helpful to use Kubernetes in a homelab even if you only have one node.
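If you want to try it, a single-node cluster is about a one-liner with something like k3s (my pick here as an example; any small distribution works):

```sh
# Install k3s; the single node runs both the control plane and the workloads
curl -sfL https://get.k3s.io | sh -

# Verify the node is Ready, then deploy to it like any other cluster
sudo k3s kubectl get nodes
```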
None of your listed use cases will even come close to taxing the 6600K. It's probably going to sit happily in idle states most of the time.
Proxmox also has great snapshotting and backup features. Makes it easier to mess around with your containers/VMs without worrying too much.
Only when using ZFS, which OP is not.
Your CPU should be perfectly capable of that. I ran Proxmox with some VMs and containers on an i5-2400 with 16GB RAM just fine.
You could run bare Debian as well, but virtualization will give you more flexibility. If you get a Zigbee dongle or the like, you can pass it through to the VM Home Assistant is running in.
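The passthrough itself is a one-liner on the host; the VMID and device ID below are just examples (get the real one from lsusb):

```sh
# Find the dongle's vendor:product ID on the Proxmox host
lsusb

# Attach it to the Home Assistant VM (VMID 100 and the ID are examples)
qm set 100 -usb0 host=10c4:ea60
```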
I don't know MergerFS, but usually the recommendation is ZFS.
I would agree the hardware would run everything fine.
Proxmox runs on Debian. But anyway, you'd be surprised what limited hardware Proxmox can run on. I have it running on a garbage mini PC and an old notebook :D
I'm surprised no one's mentioned Incus. It's a hypervisor like Proxmox, but it's designed to install onto Debian no problem. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a repo for it.
Proxmox isn't reliable if you're not paying them; the free users are the test users. A while back they pushed a bad update that broke shit. If I'd updated before they pulled it, I'd have been hosed.
Basically you want a device you don't have to worry about updating, because updates are good for security. And Proxmox ain't that.
On top of their custom kernel and stuff, it just gets fewer eyes than, say, the kernel Debian ships. Proxmox isn't worth the lock-in and brittleness just for making VMs.
So to summarize: Debian with Incus installed. BTRFS if you're happy with one drive or two drives in RAID 1; BTRFS gets you scrubbing and bitrot detection (and protection, with RAID 1). ZFS for more drives. Toss Cockpit on too.
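To sketch the bring-up (package availability depends on your Debian release, and double-check the Incus docs for the exact snapshot syntax):

```sh
# Install Incus (in Debian 13; on Debian 12 it's in bookworm-backports)
sudo apt install incus
sudo incus admin init            # interactive: pick storage (btrfs/zfs) and a bridge

# Containers and VMs use the same commands; --vm switches to a QEMU VM
incus launch images:debian/12 web-ct
incus launch images:debian/12 ha-vm --vm

# Snapshots work on both
incus snapshot create web-ct before-upgrade
```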
If you want less hands-on, go with OpenMediaVault. No room for Proxmox in my view, especially with no clustering.
Also, the iGPU on the 6600K is likely good enough for whatever transcoding you'd do (especially if it's rare and 1080p; it'll do 4K no problem, and multiple streams at once). The Nvidia card is just wasting power.
Thanks so much for mentioning this, trying it out now
My needs are pretty similar to yours, and I've recently moved back to using hypervisors after running everything from Debian to Arch to NixOS bare-metal over the last decade or so. It's so easy to bring up/tear down environments, which is great for testing things and pretty much the whole point of a homelab. I've got a few VMs + one LXC running on Proxmox with some headroom on a 6th gen i7, so you should be fine resource-wise tbh. Worth mentioning that you'll most likely need to pass your drives through to the guest VM, which is not supported via the web UI, but the config is documented on their wiki.
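For reference, the wiki method boils down to one command per disk (the VMID and disk ID here are made up):

```sh
# Pass a whole disk through to VM 101; use the stable /dev/disk/by-id/
# path rather than /dev/sdX, which can change between boots
qm set 101 -scsi1 /dev/disk/by-id/ata-WDC_WD120EDAZ_EXAMPLE-SERIAL
```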
Overall, I'm happy with this setup and loving CoreOS as a base-OS for VMs and rootless podman containers for applications.
Not to mention: Snapshots.
Proxmox is Debian. :-)
I do always suggest installing Debian first, and then installing Proxmox on top. This allows you to properly set up your disks, and networking as needed, as the Proxmox installer is a bit limited: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
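Condensed, the wiki steps for Bookworm look roughly like this (check the page for the current release before copy-pasting):

```sh
# Add the no-subscription Proxmox VE repo and its signing key
echo "deb [arch=amd64] http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-install-repo.list
wget https://enterprise.proxmox.com/debian/proxmox-release-bookworm.gpg \
  -O /etc/apt/trusted.gpg.d/proxmox-release-bookworm.gpg

apt update && apt full-upgrade
apt install proxmox-default-kernel
# Reboot into the Proxmox kernel, then:
apt install proxmox-ve postfix open-iscsi chrony
```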
Once you have it up and running, have a look at the CT Templates. There's a whole set of pre-configured templates from TurnkeyLinux (again, Debian-based) that make it trivial to set up all kinds of services in lightweight LXC containers.
For Home Assistant a VM is your best bet, as it makes setting up connectivity way easier than messing with docker networking. It also allows easy USB passthrough, for things like ZWave/Zigbee/Bluetooth adapters.
I would just install Proxmox since it is way easier
Also, last time I checked, the Debian installer didn't support ZFS.
I do always suggest installing Debian first, and then installing Proxmox on top.
Correct me if I'm wrong, but isn't Proxmox its own OS? What would be the advantage of installing Proxmox 'on top of' Debian when it's Debian already, as you pointed out?
You get some options that aren't in the installer, e.g. full disk encryption.
Hmmmm. Wouldn't you have to remove the Debian kernel and use the Proxmox kernel? Sorry, not trying to be obtuse; I've just never installed Proxmox 'on top' of Debian. I always opted for the clean install.
Yes, but that's a supported way to install Proxmox.
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
OP, I'm running Proxmox on an old Dell T320 with 32GB RAM. I'm not having any real issues doing so. I run Docker and a handful of Docker containers. I'm really not into the arr stack, but I wouldn't think you'd have much issue.
Proxmox is pretty much focused on ZFS, LXC containers, and VMs. You want MergerFS and Docker. I say avoid Proxmox and go for Debian or another distro.
I don't know about your first need ("MergerFS"), but in case you find it useful: I have an old Intel NUC 6i3SYH (i3-6100U) with 16GB RAM, and I was running Windows 10 for Plex+arr plus Home Assistant in VirtualBox. I kept running into issues until I switched to Proxmox. Now I'm running Proxmox with Docker and a bunch of containers (Plex+arr and others) and also a virtual machine with Home Assistant, and everything has been smooth. I have to say there is a learning curve, but it's very stable.
I use OpenMediaVault to run something similar. It's a headless Debian distribution with web-based config. Takes a bit of work, but I like it.
MergerFS and SnapRAID could be good for you. It's not immediate parity like with ZFS RAID (you run a regular cronjob to calculate the parity), but it supports mismatched drive sizes, expansion of the pool at any time, and some other features that should be good for a media server where live parity isn't critical.
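A sketch of what that config looks like (the drive mount points are examples):

```
# /etc/snapraid.conf sketch
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```

Then a cronjob along the lines of `0 3 * * * snapraid sync` recomputes parity nightly, and an occasional `snapraid scrub` verifies the array against bitrot.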
Proxmox and TrueNAS are nice because they wrap ZFS management and other remote management in a nice UI, but really you can just use Debian with SSH and do the same stuff. DietPi has a few nice utilities on top of Debian (a DDNS manager and CLI fstab utilities, for example), but they're not super necessary.
Personally I use TrueNAS now, but I also used DietPi/Debian for years; both have benefits, and it really depends on your workflow. OMV supports everything you want too (including SnapRAID) but takes extra setup, which put me off.
Docker or LXC containers won't hurt your performance, btw. There's supposedly some tiny overhead, but both are designed to use the underlying Linux system as much as possible: they're way faster than on WSL. Hardware acceleration gets deferred to the GPU for most things, and there's lots of documentation for setting it up. The best thing about Docker is that every application is kept separate from the others: updates can be done incrementally, and rollbacks are possible too!
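For example, hardware-accelerated transcoding in a container is mostly a matter of handing the host's render device through (Jellyfin and the paths here are just examples):

```sh
# Give the container access to the host's iGPU via /dev/dri
docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /srv/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin
```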
Thanks everyone, I feel much better about moving forward. I'm leaning towards Proxmox at this point because I could still run Windows as a VM while playing around and setting up a new drive pool. I'd like a setup that I can gradually upgrade because I don't often have a full day to dedicate to these matters.
MergerFS still seems like a good fit for my media pool, simply to solve the issue where one media type fills a whole drive while another sits at 50% capacity. I've lost this data before, and it was easy to recover via my preferred backup method (a private torrent tracker with paid freeleech). A parity drive with SnapRAID might be a nice stopgap. I don't feel confident enough with ZFS to potentially sacrifice uptime.
My Docker containers and server databases, however, are on a separate SSD that could benefit from ZFS. Those files are backed up regularly so I can recover easily, and I'd like as many failsafes as possible to protect myself. Having my Radarr database was indispensable when I lost a media drive a few weeks ago.
Use NixOS! You won't regret it.
Not calling you out specifically, OP, but can someone tell me why this is a thing on the internet?
multiple 12GB drives
GB??? I automatically assume TB when people say this, but it's still a speed bump when I'm reading the post.
Good catch, yes my drives are 12TB. My brain is still stuck in 2005. :)