this post was submitted on 24 Feb 2024
199 points (100.0% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


With free ESXi over (not shocking, but sad), I am now about to move away from a virtualisation platform I've used for a quarter of a century.

Never having really tried the alternatives, is there anything that looks and feels like esxi out there?

I don't host anything exceptional, and I don't need production quality for myself, but in all seriousness, what we run at home ends up at work at some point, so there's that aspect too.

Thanks for your input!

[–] [email protected] 100 points 1 year ago* (last edited 1 year ago) (3 children)
  • KVM/QEMU/Libvirt/virt-manager on a Debian 12 for minimal installation that allows you to choose backup tools and the like on your own.
  • Proxmox for a mature KVM-based virtualizer with built in tools for backups, clustering, etcetera. Also supports LXC. https://github.com/proxmox
  • Incus for LXC/KVM virtualization - younger solution than Proxmox and more focused on LXC. https://github.com/lxc/incus
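For the first option, a rough sketch of the minimal Debian 12 setup (package names are Debian 12's; double-check before copy-pasting, and adjust to your distro):

```shell
# Minimal KVM/QEMU/libvirt stack on Debian 12 (run as root).
apt install --no-install-recommends \
  qemu-system libvirt-daemon-system virtinst

# Let a regular user manage VMs without sudo (replace "alice").
adduser alice libvirt

# Verify the hypervisor is reachable.
virsh -c qemu:///system list --all

# Optional GUI for a desktop/lab machine:
apt install virt-manager
```

From there you pick your own backup/snapshot tooling, which is exactly the appeal of this route.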
[–] [email protected] 19 points 1 year ago* (last edited 1 year ago) (6 children)

/thread

This is my go-to setup.

I try to stick with libvirt/virsh when I don't need any graphical interface (integrates beautifully with ansible [1]), or when I don't need clustering/HA (libvirt does support "clustering" at least in some capability, you can live migrate VMs between hosts, manage remote hypervisors from virsh/virt-manager, etc). On development/lab desktops I bolt virt-manager on top so I have the exact same setup as my production setup, with a nice added GUI. I heard that cockpit could be used as a web interface but have never tried it.
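To give an idea of the "clustering in some capability" bit, remote management and live migration from virsh look roughly like this (hostnames and the VM name are examples; live migration assumes shared storage between the hosts):

```shell
# Manage a remote libvirt hypervisor over SSH.
virsh -c qemu+ssh://admin@hv1.example.com/system list --all

# Live-migrate a running VM from hv1 to hv2.
virsh -c qemu+ssh://admin@hv1.example.com/system \
  migrate --live --persistent myvm \
  qemu+ssh://admin@hv2.example.com/system
```

The same URIs work in virt-manager's "Add Connection" dialog, which is why the desktop and server setups stay identical.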

Proxmox on more complex setups (I try to manage it using ansible/the API as much as possible, but the web UI is a nice touch for one-shot operations).

Re incus: I don't know for sure yet. I have an old LXD setup at work that I'd like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.

[–] [email protected] 10 points 1 year ago (1 children)

I use cockpit and my phone to start my virtual fedora, which has pcie passthrough on gpu and a usb controller.

(Screenshots of the Cockpit desktop and mobile UI omitted.)

[–] [email protected] 6 points 1 year ago

We use cockpit at work. It's OK, but it definitely feels limited compared to Proxmox or Xen Orchestra.

Red Hat's focus is really on Openstack, but that's more of a cloud virtualization platform, so not all that well suited for home use. It's a shame because I really like Cockpit as a platform. It just needs a little love in terms of things like the graphical console and editing virtual machine resources.

[–] [email protected] 4 points 1 year ago (2 children)

Re incus: I don’t know for sure yet. I have an old LXD setup at work that I’d like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.

Maybe you should consider consolidating into Incus. You're already running LXC containers, so why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated, and free?

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (3 children)

Hey look, it's the Incus guy. Every time this topic comes up, you chime in and roast Proxmox and its potential issues, with a link to a previous comment roasting Proxmox and its potential issues, and at no point do you go into what those potential issues are beyond the broad catch-all term "bloat".

I respect your data center experience, but I wish you were more forward with your issues instead of broad, generalized terms.

As someone with much less enterprise experience, but some small-business IT administration experience: how does Incus replace ESXi for virtual machines, coming from the understanding that "containerization is the new hotness but doesn't work for me"?

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (5 children)

The migration is bound to happen in the next few months, and I can't recommend moving to Incus yet since it's not in stable/LTS repositories for Debian/Ubuntu, and I really don't want to encourage adding third-party repositories to the mix. They are already widespread in the setup I inherited (new gig), and part of the major clusterfuck that is upgrade management (or the lack thereof). I really want to standardize on official distro repositories. On the other hand, the current LXD packages are provided by snap (...), so that would still be an improvement, I guess.

Management is already sold to the idea of Proxmox (not by me), so I think I'll take the path of least resistance. I've had mostly good experiences with it in the past, even if I found their custom kernels a bit strange to start with... do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable? I'd still like to put a word of caution about that.

[–] [email protected] 3 points 1 year ago (2 children)
[–] [email protected] 7 points 1 year ago (1 children)

They're obviously looking for a type 1 hypervisor like ESXi. A type 2 hypervisor like VirtualBox does not fit the bill.

[–] [email protected] 4 points 1 year ago (1 children)

What is the difference between type 1 & 2 please ?

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago) (1 children)

Type 1 runs on bare metal: you install it directly onto server hardware. Type 2 is an application (not an OS) that lives inside an OS. Regardless of whether that OS is a guest or a host, the hypervisor is a guest of that platform, and the VMs inside it are guests of that hypervisor.

[–] [email protected] 2 points 1 year ago (1 children)
[–] [email protected] 4 points 1 year ago

The previous comment is an excellent summary. It is worth noting that there are some type 1 hypervisors that can look like type 2s. Specifically, KVM in Linux (which sometimes gets referred to as Virt-manager, Virtual Machine Manager, or VMM, after the program typically used to manage it) and Hyper-V in Windows.

These get mistaken for type 2 hypervisors because they run inside of your normal OS, rather than being a dedicated platform that you install in place of it. But the key here is that the hypervisor itself (that is, the software that actually runs the VM) is directly integrated into the underlying operating system. You were installing a hypervisor OS the whole time, you just didn't realise it.

The reason this matters is that type 1 hypervisors can operate at the kernel level, meaning they can directly manage resources like your memory, CPU and graphics. Type 2 hypervisors have to queue with all the other pleb software to request access to these resources from the OS. This means that type 1 hypervisors will generally offer better performance.

With hypervisor platforms like Proxmox, ESXi, Hyper-V Server Core, or XCP-ng, what you get is a type 1 hypervisor with an absolutely minimal OS built around it. Basically, just enough software to do the job of running VMs, and nothing else. Like a drag racer.
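One way to see the "you were running a hypervisor OS the whole time" point on Linux: the KVM device node is exposed by the kernel itself, not by an application. A quick check (Linux-only; output depends on your hardware and BIOS settings):

```shell
# /dev/kvm only exists when the kvm module is loaded and the CPU
# exposes hardware virtualization (Intel VT-x or AMD-V).
if [ -e /dev/kvm ]; then
  echo "KVM available: the kernel is your type 1 hypervisor"
else
  echo "No /dev/kvm - check that virtualization is enabled in firmware"
fi

# A non-zero count means the CPU advertises the vmx (Intel) or
# svm (AMD) flag.
grep -c -E 'vmx|svm' /proc/cpuinfo
```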

[–] [email protected] 3 points 1 year ago

VB is awful.

And I use it every day.

It's like a first-try at a hypervisor. Terrible UI, with machine config scattered around. Some stuff can only be done on the command line after you search the web for how to do it (like basic stuff, say run headless by default). Enigmatic error messages.

[–] [email protected] 3 points 1 year ago

This is what I would recommend too - QEMU + libvirt with Sanoid for automatic snapshot management. Incus is also a solid option too

[–] [email protected] 38 points 1 year ago

Proxmox works well for me

[–] [email protected] 29 points 1 year ago (1 children)
[–] [email protected] 2 points 1 year ago

This is the way

[–] [email protected] 19 points 1 year ago (1 children)

If you're running mostly Linux VMs, Proxmox is really good. It's based on KVM and has a really nice feature set.

[–] [email protected] 18 points 1 year ago

Windows guests also run fine on KVM, use the Virtio drivers from Fedora project.
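A sketch of what that looks like with virt-install (ISO paths, VM name, and sizing are examples; the second CD-ROM is the virtio-win driver ISO from the Fedora project, so the Windows installer can load the VirtIO storage driver):

```shell
# Windows guest with VirtIO disk and network on KVM/libvirt.
virt-install \
  --name win11 --memory 8192 --vcpus 4 \
  --os-variant win11 \
  --disk size=80,bus=virtio \
  --network network=default,model=virtio \
  --cdrom /isos/Win11.iso \
  --disk /isos/virtio-win.iso,device=cdrom
```

Without the driver ISO attached, the Windows installer won't see the virtio disk at all, which is the usual first-timer stumbling block.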

[–] [email protected] 13 points 1 year ago (3 children)

I've used Hyper-V and in fact moved away from ESXi long ago. VMWare had amazing features but we could not justify the ever-increasing costs. Hyper-V can do just about anything VMWare can do if you know Powershell.

[–] [email protected] 3 points 1 year ago

Seconded for Hyper-V, and MUCH easier to patch the free edition than ESXi.

[–] [email protected] 3 points 1 year ago

Another vote for Hyper-V. Moved to it from ESXi at home because I had to manage a LOT of Hyper-V hosted machines at work, so I figured I’d may as well get as much exposure to it as I could. Works fine for what I need.

[–] [email protected] 2 points 1 year ago

I use it with WAC on my home server and it's good enough for anything I need to do. Easy to create VMs using that UI, PS not even needed.

[–] [email protected] 8 points 1 year ago (1 children)

I'm pretty happy with XCP-ng with their XenOrchestra management interface. XenOrchestra has a free and enterprise version, but you can also compile it from source to get all the enterprise features. I'd recommend this script: https://github.com/ronivay/XenOrchestraInstallerUpdater

I'd say it's a slightly more advanced ESXi with vCenter and less confusing UI than Proxmox.

[–] [email protected] 8 points 1 year ago (1 children)

Qemu/virt manager. I've been using it and it's so fast. I still need to get the clipboard sharing working but as of right now it's the best hypervisor I've ever used.

[–] [email protected] 2 points 1 year ago

I love it. Virtmanager connecting over ssh is so smooth.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
DNS Domain Name Service/System
ESXi VMWare virtual machine hypervisor
HA Home Assistant automation software
~ High Availability
LTS Long Term Support software version
LXC Linux Containers
ZFS Solaris/Linux filesystem focusing on data integrity
k8s Kubernetes container management package

[Thread #540 for this sub, first seen 24th Feb 2024, 11:35] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 6 points 1 year ago (2 children)

If you are dipping toes into containers with kvm and proxmox already, then perhaps you could jump into the deep end and look at kubernetes (k8s).

Even though you say you don't need production quality, it actually does a lot for you, and you only need to learn a single API framework, which has really great documentation.

Personally, if I am choosing a new service to host, one of my first metrics in that decision is how well it is documented.

You could also go the simple route and use Docker to make containers. However, making your own containers is optional, as most services have pre-built ones you can use.

You could even use auto scaling to run your cluster with just 1 node if you don't need it to be highly available with a lot of 9s in uptime.

The trickiest thing with K8s is the networking, certs and DNS but there are services you can host to take care of that for you. I use istio for networking, cert-manager for certs and external-dns for DNS.

I would recommend trying out k8s first on a cloud provider like digital ocean or linode. Managing your own k8s control plane on bare metal has its own complications.

[–] [email protected] 8 points 1 year ago

There are also full-suites like rancher which will abstract away a lot of the complexity

[–] [email protected] 2 points 1 year ago (1 children)

K8s is great, but you're changing the subject and not answering OP's question. Containers =/= VMs.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

I actually moved everything to Docker containers at home... not apples to apples, but it turns out I don't need so many full OSes.

At work we have a mix of things running right now to see. I don't think we'll land on ovirt or openstack. It seems like we'll bite the cost bullet and move all the important services to amazon.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago)

I do not know; however, that logo is amazing

EDIT: Found it — https://sega-ai.neocities.org/

[–] [email protected] 5 points 1 year ago (1 children)

OOTL and someone who only uses a vm once every several years for shits & grins: What happened to vmware?

[–] [email protected] 12 points 1 year ago (1 children)

As part of the transition of perpetual licensing to new subscription offerings, the VMware vSphere Hypervisor (Free Edition) has been marked as EOGA (End of General Availability). At this time, there is not an equivalent replacement product available.

For further details regarding the affected products and this change, we encourage you to review the following blog post: https://blogs.vmware.com/cloud-foundation/2024/01/22/vmware-end-of-availability-of-perpetual-licensing-and-saas-services/

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

Whelp..boo-urns. :(

[–] [email protected] 5 points 1 year ago (3 children)

Minikube and try to get everything on Kubernetes?

[–] [email protected] 5 points 1 year ago (1 children)

Where does running VMs compare in any way to what Kubernetes does?

[–] [email protected] 2 points 1 year ago

Depends on what you want to self host? Could be worth it to see if what you self host can be deployed as containers instead

[–] [email protected] 4 points 1 year ago

Kubernetes yes, but minikube is kinda meh as a way to install it outside of development environments.

There are so many more manageable ways, like RKE/Rancher (which gives you the option to go k3s), Kubespray, or even kubeadm.

All of those will result in a cluster that's more suitable for running actual workloads.
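For scale, the k3s route really is this small for a single-node lab (official install script; review it before piping to sh if that makes you twitchy):

```shell
# Single-node k3s (lightweight Kubernetes) install.
curl -sfL https://get.k3s.io | sh -

# k3s bundles kubectl; check that the node comes up Ready.
k3s kubectl get nodes
```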

[–] [email protected] 2 points 1 year ago (2 children)

I wouldn’t recommend going K8S only in a homelab. Too much effort and some things don’t fit well (Home Assistant, Gaming VM?)

[–] [email protected] 5 points 1 year ago (1 children)

I know everyone says to use Proxmox, but it's worth considering xcp-ng as well.

[–] [email protected] 4 points 1 year ago

For home have a crack at KVM with front ends like proxmox or canonical lxd manager.

In an enterprise environment, take a look at Hyper-V, or if you think you need hyperconverged infrastructure, look at Nutanix.

[–] [email protected] 2 points 1 year ago (1 children)

I'm moving to oVirt.

Proxmox was out and oVirt was an excellent fit.

Choose carefully; don't just go with the herd.

[–] [email protected] 7 points 1 year ago

What does oVirt offer that proxmox doesn't? I'm asking because I want to move an ESXi server to another hypervisor, I'm 90% sure it'll be Proxmox, but I'd like to know my options.
