Time to reassess the vSAN (blocksandfiles.com)
submitted 2 weeks ago* (last edited 2 weeks ago) by cm0002@lemmy.world to c/sysadmin@lemmy.world
 

As IT leaders move away from VMware, they face a critical decision: do they stick with traditional storage architectures, or is now the time to finally unlock the full potential of an infrastructure that converges virtualization, storage, and networking technologies?

Early convergence efforts centered on hyperconverged infrastructure (HCI), where storage ran as a virtual machine under the hypervisor, an approach commonly called a vSAN. While adoption has lagged behind traditional three-tier architectures, recent advancements have significantly improved the vSAN and addressed past shortcomings, making it worth reconsidering.

top 11 comments
[–] mosiacmango@lemm.ee 12 points 2 weeks ago (1 children)

The above is an ad, but it's both open about that and technically interesting, if a bit repetitive.

I honestly had no idea a vSAN was primarily running on hidden storage VMs on the host. I thought the storage stack was as tightly integrated as networking/compute at the hypervisor level, so the light breakdown there was worth the read.

[–] computergeek125@lemmy.world 2 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Not all of them. Ceph on Proxmox and (iirc) VMware vSAN run bare metal. That statement was a call-out post for Nutanix, which runs their storage inside a VM cluster. Both of these have been doing so for years.

[–] mosiacmango@lemm.ee 1 points 2 weeks ago (1 children)

Okay, that tracks better. I'm familiar with Ceph and Proxmox using it as a "fake" vSAN. I'm also familiar with VMware's vSAN and had never seen any indicator of an internal storage VM, so that was odd.

Nutanix being the one doing the above makes sense, as I haven't worked with them yet.

[–] computergeek125@lemmy.world 1 points 2 weeks ago (1 children)

Out of curiosity, why would you call Ceph a fake HCI? As far as I've seen, it behaves like any of the other HCI systems I've used.

[–] mosiacmango@lemm.ee 1 points 2 weeks ago* (last edited 2 weeks ago)

I'd call it viable hyper-converged infrastructure when it's in use as such, like with Proxmox, but it's not scoped to just being a vSAN. It's a distributed storage network; its design is way wider than just being used for HCI/vSAN/etc.
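
To make the "wider than HCI" point concrete, here is a minimal sketch of talking to Ceph directly through its librados Python bindings, with no hypervisor in the picture; the config path and pool name are assumptions, not details from this thread.

```python
# Minimal sketch: using Ceph directly as object storage over RADOS,
# with no hypervisor or HCI layer involved. Assumes the python3-rados
# bindings are installed, /etc/ceph/ceph.conf points at a reachable
# cluster, and a pool named "testpool" exists (all assumptions).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("testpool")  # hypothetical pool name
    try:
        # Write and read back a raw RADOS object -- the same primitive
        # that RBD block devices (what VM disks use) and CephFS sit on.
        ioctx.write_full("demo-object", b"ceph is a storage system first, a vSAN second")
        print(ioctx.read("demo-object"))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```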

[–] comador@lemmy.world 8 points 2 weeks ago

"comparing VMware vSAN, Nutanix AOS Storage and VergeIO VergeOS"

Hint: There is no vSAN/UCI/HCI silver bullet.

They all suffer from performance-related issues either up front or over time, have scalability deficits or limitations, and offer lackluster price-to-performance ratios (Nutanix can suck my 5-year forced-upgrade balls).

[–] BenM2023@lemmy.world 4 points 2 weeks ago

I have a small cluster running with Starwind as my vSAN. For me it's much cheaper than a hardware equivalent and is performant enough.

Oh and I haven't had a "stop work" issue with it in 8 years.

Somewhat remarkably, it was OK performance-wise when sync/iSCSI traffic was running on 1Gb copper connections to spinning-rust storage... Now I have 10Gb fibre between the hosts, coupled with NVMe drives, and it's (comparatively) quite quick.

As with all things YMMV... But vSAN is the way for my use case.

[–] Brkdncr@lemmy.world 4 points 2 weeks ago

Did small Nutanix deployments a while ago, and liked it enough to go all in on vSAN for the datacenter about 7 years ago.

No regrets. Great performance, no SAN storage, no SAN network. Easy to manage on Dell ReadyNodes.

Going to go all in with Nutanix next round. Same costs but it’s not Broadcom.

[–] jlh@lemmy.jlh.name 3 points 2 weeks ago (1 children)

Ceph / OpenShift Data Foundation is also an option for hyperconverged clustered storage.

[–] ikidd@lemmy.world 3 points 2 weeks ago (1 children)

Ceph is also quite tightly integrated on Proxmox for smaller deployments.
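
For a sense of how that integration surfaces, here is a rough sketch of querying the status of the Ceph cluster that Proxmox itself manages, over the Proxmox VE API via the third-party proxmoxer library; the hostname, node name, and credentials are placeholders, not anything from this thread.

```python
# Rough sketch: asking Proxmox for the health of the Ceph cluster it
# manages, via the Proxmox VE API and the third-party "proxmoxer"
# library. Host, node name, and credentials are placeholders.
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "pve1.example.com",      # placeholder hostname
    user="root@pam",
    password="changeme",     # an API token is the better option in practice
    verify_ssl=False,
)

# Mirrors `pvesh get /nodes/<node>/ceph/status`, i.e. roughly `ceph status`
# for the cluster Proxmox runs alongside the guests.
status = proxmox.nodes("pve1").ceph.status.get()
print(status.get("health"))
```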

[–] jlh@lemmy.jlh.name 1 points 2 weeks ago* (last edited 2 weeks ago)

Yup! Ceph is quite nice once you're up at the ~6-node scale. OpenShift Data Foundation is based on Rook, which is a nice way to automate Ceph deployment, as well.

I've actually been running Rook on my home server rack, up to 120TiB now 😁
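
As an illustration of what "Rook automates Ceph deployment" means in practice, here is a hedged sketch of submitting a CephCluster custom resource with the Kubernetes Python client; the namespace, image tag, and spec values are illustrative assumptions, not anyone's actual setup.

```python
# Sketch of Rook's declarative model: hand Kubernetes a CephCluster
# custom resource and the Rook operator deploys and manages Ceph.
# Assumes the operator is already installed in the "rook-ceph"
# namespace; the image tag and spec values are illustrative only.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

ceph_cluster = {
    "apiVersion": "ceph.rook.io/v1",
    "kind": "CephCluster",
    "metadata": {"name": "rook-ceph", "namespace": "rook-ceph"},
    "spec": {
        "cephVersion": {"image": "quay.io/ceph/ceph:v18"},  # illustrative tag
        "dataDirHostPath": "/var/lib/rook",
        "mon": {"count": 3},
        "storage": {"useAllNodes": True, "useAllDevices": True},
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="ceph.rook.io",
    version="v1",
    namespace="rook-ceph",
    plural="cephclusters",
    body=ceph_cluster,
)
```

From there the operator handles mon placement, OSD provisioning on the discovered devices, and upgrades when the image tag changes.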