submitted 1 day ago* (last edited 1 day ago) by [email protected] to c/[email protected]

Hi, right now I'm in the planning process for a self-hosted virtualization and containerization environment on one or more Linux hosts. Incus looks promising, and the instances will be mainly Linux. I'm not sure how to solve the shared storage issue - since it is a bad idea to mount the same filesystem on more than one instance at once. Maybe you have some hints? I'd appreciate that. :)

The OS of an instance can sit on an exclusively used volume; that part is solved for me (store it in a local storage pool).

But how should I organize shared read/write storage that multiple instances can access at the same time? It should be easily usable as a mount point. Storage replication among multiple hosts is optional - there is always rsync. Is NFS still the way to go, or are there nicer options? Is there an overlayfs-like approach that could resolve concurrent writes?
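For context, the classic NFS route would look roughly like this sketch (host name, export path, and subnet are placeholders, not an existing setup):

```shell
# /etc/exports on the storage host:
#   /srv/shared  192.168.1.0/24(rw,sync,no_subtree_check)

sudo exportfs -ra                                      # re-read /etc/exports
sudo mount -t nfs storagehost:/srv/shared /mnt/shared  # on each instance
```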

[-] [email protected] 7 points 1 day ago

There are a bunch of options available, and the right layout depends on the exact use case. GlusterFS, Ceph, (S3-compatible) object storage, straightforward NFS, database replication - these all target different scenarios: VM failover, decoupling storage from a service, something like Jellyfin sharing its media library with another service, horizontal scaling of services... I don't think there is a single answer to all of that.

[-] [email protected] 2 points 1 day ago

Thanks. I will take a closer look into GlusterFS and Ceph.

The use case would be a file storage for anything (text, documents, images, audio and video files). I'd like to share this data among multiple instances and don't want to store that data multiple times - it is bad for my bank account and I don't want to keep track of the various redundant file sets. So data and service decoupling.

Service scaling isn't a requirement. It's more about different services (some as containers, some as VMs) which should work on the same files, sometimes concurrently.

That Jellyfin/*arr approach works well and is easy to set up if all containers access the same Docker volume. But it doesn't work when VMs (KVM) or other containers (LXC) come into play, so I can't use it in this context.
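For Incus containers specifically, the same effect as a shared Docker volume can probably be had with a disk device (instance names and paths here are made up for illustration):

```shell
# Attach one host directory to several containers read/write:
incus config device add jellyfin media disk source=/srv/media path=/media
incus config device add sonarr   media disk source=/srv/media path=/media
# shift=true maps UIDs/GIDs into unprivileged containers:
incus config device set jellyfin media shift=true
```

For VMs the same device type goes through virtiofs rather than a bind mount, but concurrent writes still funnel through the host filesystem either way.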

Failover is nice to have. But there is more to it than just the data replication between hosts. It's not a priority to me right now.

Database replication isn't required.

[-] [email protected] 2 points 1 day ago

GlusterFS is (was) really cool, but I would not set up a new instance today. It used to have significant support and development from Red Hat, but they decided to halt their work on it and focus on Ceph.

GlusterFS is still getting a few slow updates from alternate developers, but I would only count on those being fixes for existing installations.

[-] [email protected] 2 points 1 day ago* (last edited 1 day ago)

Just be warned that those two are relatively complicated pieces of tech. They're meant for setting up distributed storage networks, including things like replication and load balancing - clusters with failover to a different datacenter and such. If you just want access to the same storage on one server from different instances, that's likely way too complicated for you. (And more complexity generally means more maintenance and more failure modes.)

[-] [email protected] 2 points 1 day ago

Fair point. I don't really need the distributed storage part for my scenario - not right now.

Maybe I'll start with NFS and explore Gluster as soon as storage distribution is needed. It looks like it could be a drop-in replacement for NFSv3. Since it doesn't access the block devices directly, I could still use the respective filesystem's toolset (e.g. ext4 or btrfs) for maintenance tasks.
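If I go that route, the Gluster side would be something like this sketch (host and brick path are placeholders; `force` may be needed if the brick sits on the root filesystem):

```shell
# Create and start a single-brick volume backed by an existing ext4/btrfs mount:
sudo gluster volume create shared host1:/data/brick1/shared
sudo gluster volume start shared
# Clients mount it much like an NFS share, via the FUSE client:
sudo mount -t glusterfs host1:/shared /mnt/shared
```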

[-] [email protected] 2 points 1 day ago

VirtioFS. You can share a directory from the host to any number of VMs with that. libvirtd is good and even has a nice GUI in virt-manager.
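Once a virtiofs filesystem is attached to the VM (in virt-manager: Add Hardware → Filesystem, driver virtiofs), the guest mounts it by its target tag - `shared` here stands in for whatever tag was configured:

```shell
# Inside the guest:
sudo mount -t virtiofs shared /mnt/shared
# or persistently via /etc/fstab:
#   shared  /mnt/shared  virtiofs  defaults  0  0
```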

[-] [email protected] 1 points 1 day ago

Here is what Incus supports. If you have multiple hypervisor hosts, then you are talking about remote storage.

https://linuxcontainers.org/incus/docs/main/explanation/storage/

[-] [email protected] 1 points 1 day ago

What device are you going to put this shared storage on?

[-] [email protected] 1 points 1 day ago

Thanks for asking, I left that detail out. An SSD attached to the virtualization host via SATA. I plan to use either an LVM2 volume group or a Btrfs filesystem with subvolumes to provide the storage pool to Incus/LXC.
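Either backend maps to a one-liner on the Incus side (device and volume-group names below are assumptions):

```shell
# Btrfs pool created directly on the SSD:
incus storage create pool1 btrfs source=/dev/sda
# ...or an LVM pool on an existing volume group:
incus storage create pool1 lvm source=vg0
```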

[-] [email protected] 1 points 1 day ago

Just use NFS then.
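One caveat: unprivileged containers usually can't mount NFS themselves, so a common pattern is to mount the export once on the Incus host and pass it into the instances as a disk device (host name and paths are placeholders):

```shell
sudo mount -t nfs storagehost:/srv/shared /mnt/shared
incus config device add mycontainer shared disk source=/mnt/shared path=/mnt/shared
```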

this post was submitted on 25 Jul 2025