this post was submitted on 25 Feb 2024
52 points (100.0% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.



Hello, I currently have a home server, mainly for media, with an SSD for the system and two 6TB hard drives set up in RAID 1 using mdadm; that's the most I can fit in the case. I have been getting interested in ZFS and want to expand my storage since it's getting pretty full. I also have two 12TB external hard drives. My question is: can I create a pool (I think that's what they're called) using all four of these drives in a raidz configuration, or is this a bad idea?

(6TB+6TB) + 12TB + 12TB should give me 24TB usable, and if I understand this correctly, it should keep working even if one of the 6TB or 12TB drives fails.

How would one go about doing this? Would you combine the two 6TB drives into a RAID 0 with mdadm and then create the pool over that?
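Roughly what I had in mind, in case it helps, would look something like this (device names are just placeholders, I haven't actually run this):

```
# Combine the two 6TB drives into one 12TB RAID 0 device with mdadm
# (placeholder device names)
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Then build a raidz1 pool out of the md device and the two 12TB externals
zpool create tank raidz1 /dev/md1 /dev/sdd /dev/sde
```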

I am also just dipping my toes into NixOS, so a resource that covers it might be useful, since the home server is currently running Debian. The server will be left at my parents' house, and I'd like it to need minimal on-site support: my parents just need to be able to turn the screen on and use the browser.

Thank you

all 17 comments
[–] [email protected] 10 points 1 year ago* (last edited 1 year ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger that I've seen in this thread:

| Fewer Letters | More Letters |
|---|---|
| LVM | (Linux) Logical Volume Manager for filesystem mapping |
| NAS | Network-Attached Storage |
| RAID | Redundant Array of Independent Disks for mass storage |
| SATA | Serial AT Attachment interface for mass storage |
| SSD | Solid State Drive mass storage |
| ZFS | Solaris/Linux filesystem focusing on data integrity |

6 acronyms in this thread; the most compressed thread commented on today has 30 acronyms.

[Thread #542 for this sub, first seen 25th Feb 2024, 08:45] [FAQ] [Full list] [Contact] [Source code]

[–] [email protected] 6 points 1 year ago (2 children)

I believe ZFS works best when it has direct access to the disks, so putting an md device underneath it is not best practice. I'm not sure how well ZFS handles external disks either, but that's something to consider. As for the drive sizes and redundancy, each size should get its own vdev: a mirror vdev of the two 6TB drives and a mirror vdev of the two 12TB drives, for maximum redundancy against drive failure, totaling 18TB usable in your pool. Later on, if you need more space, you can create new vdevs and add them to the pool.
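For reference, creating that layout would look roughly like this (pool name and device paths are placeholders; by-id paths are usually preferred over /dev/sdX):

```
# Pool with two mirror vdevs: 2x6TB and 2x12TB (placeholder device paths)
zpool create tank \
  mirror /dev/disk/by-id/ata-6TB_A /dev/disk/by-id/ata-6TB_B \
  mirror /dev/disk/by-id/ata-12TB_A /dev/disk/by-id/ata-12TB_B

# Later, grow the pool by adding another mirror vdev
zpool add tank mirror /dev/disk/by-id/ata-NEW_A /dev/disk/by-id/ata-NEW_B
```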

If you're not worried about redundancy, then you could bypass ZFS and just set up a RAID 0 through mdadm or add the disks to an LVM VG to use all the capacity, but remember that you would likely lose the whole volume if a disk dies. Keep in mind that this includes accidentally unplugging one of the external disks.
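If you did go that route, the LVM version would look roughly like this (volume group and device names are placeholders):

```
# Lump all four disks into one big volume group -- no redundancy,
# losing any one disk loses the whole logical volume (placeholder names)
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate media_vg /dev/sdb /dev/sdc /dev/sdd /dev/sde
lvcreate -l 100%FREE -n media media_vg
mkfs.ext4 /dev/media_vg/media
```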

[–] [email protected] 1 points 1 year ago

Yeah, I definitely want the redundancy. Most of what I will be storing would not be life-changing if lost, just a big inconvenience, and the life-changing stuff I plan to back up to cloud storage.

[–] [email protected] 5 points 1 year ago (1 children)

You’ve got some decent answers already, but since you’re getting interested in ZFS, I wanted to make sure you know about discourse.practicalzfs.com. It’s the successor to the ZFS subreddit and it’s a great place to get expert advice.

[–] [email protected] 1 points 1 year ago

Thank you for that, I will have to have a look into it. I am quite new to this and not completely sure how to go about things in a way I won't regret half a year or so down the line.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (1 children)

Is there any particular reason you're interested in using ZFS?

How do you intend to move over the data on the 2x6 array if you create a new pool with all the drives?

mdadm RAID1 is easy to handle, fairly safe from write holes and easy to upgrade.

If it were me, I'd upgrade to a 2x12TB array (mdadm RAID 1 or ZFS mirror, whichever you want), stored internally, and use the old 6TB drives as external cold-storage backups with Borg Backup. Not necessarily for the media files, but you must have some important data that you don't want to lose (passwords, 2FA codes, emails, phone photos, etc.).
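If you go the Borg route, the basic workflow against one of the external drives looks roughly like this (repo path, archive name and source paths are placeholders):

```
# One-time: initialize an encrypted Borg repository on the external 6TB drive
# (placeholder paths throughout)
borg init --encryption=repokey /mnt/cold-backup/borg-repo

# Each backup run: create a dated archive of the important stuff
borg create --stats --compression zstd \
  /mnt/cold-backup/borg-repo::'important-{now:%Y-%m-%d}' \
  /srv/important /home/user/documents

# Thin out old archives so the 6TB drive doesn't fill up
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/cold-backup/borg-repo
```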

I wouldn't trust USB-connected arrays much. Most USB enclosures and adapters aren't designed for 24/7 connectivity, and arrays (especially ZFS) are sensitive to the slightest error. Mixing USB drives with any ZFS pool is a recipe for headache IMO.

I could accept using the 2x6TB as a RAID 1 or mirror by themselves, but that's it; don't mix them with the internal drives.

Not that there's much you could do with that drive setup anyway, since the sizes are mismatched. You could try Unraid or SnapRAID+mergerfs, which can do parity with mismatched drives, but it's meh.

Oh, and never use RAID 0 as the bottom layer of anything; when a drive breaks, you lose everything.

[–] [email protected] 3 points 1 year ago (1 children)

Make sure you understand volume block size before you start using ZFS. It has a big impact on compression, performance, and even disk utilization. In certain configurations you may be surprised to find that as much as 25% of your disk space (in addition to parity) is effectively gone, and it is non-trivial to change the block size after the fact.
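For example (pool/dataset names and device paths are placeholders), these are the knobs in question, and two of them can only be set at creation time:

```
# ashift (sector alignment) is fixed per vdev at creation -- 12 suits 4K-sector disks
zpool create -o ashift=12 tank mirror /dev/disk/by-id/ata-12TB_A /dev/disk/by-id/ata-12TB_B

# recordsize can be changed later but only applies to newly written data;
# large records (e.g. 1M) tend to suit big media files
zfs create -o recordsize=1M -o compression=lz4 tank/media

# volblocksize for zvols is fixed at creation and is where the surprise
# space overhead on raidz usually shows up
zfs create -V 100G -o volblocksize=16k tank/vm-disk
```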

[–] [email protected] 1 points 1 year ago

That's definitely something I have to look into. The NixOS page on ZFS had a link to a ZFS cheat sheet of sorts that I have been trying to wrap my head around. Thanks for pointing it out though.