It never ceases to amaze me how far we can still take a piece of technology that was invented in the 50s.
That's like developing punch cards to the point where the holes are microscopic and can also store terabytes of data. It's almost Steampunk-y.
Solid state is kinda like a microscopic punch card.
More like microscopic fidget bubble poppers.
When the computer wants a bit to be a 1, it pops it down. When it wants it to be a 0, it pops it up.
If it were like a punch card, it couldn't be rewritten, as writing to it would permanently damage the disc. A CD-R is basically a microscopic punch card though, because the laser actually burns away material to write the data to the disc.
That's how most technology is:
Almost everything we have today is due to incremental improvements from something much older.
I can't wait for datacenters to decommission these so I can actually afford an array of them on the second-hand market.
Home Petabyte Project here I come (in like 3-5 years 😅)
Exactly, my NAS is currently made up of decommissioned 18TB Exos drives. Great deal, and I can usually still get them RMA'd the handful of times they fail.
Where is a good place to search for decommissioned ones?
ServerPartDeals has done me well. Drives often come new enough that they still have a decent amount of manufacturer's warranty remaining (Exos is 5yr), and depending on the drive you buy from them, SPD will RMA a drive for 5 years from purchase (but not always, it depends on the listing, so read the fine print).
I have gotten 2 bad drives from them out of 18 over 5 years or so. Both bad drives were found almost immediately with basic maintenance steps prior to adding them to the array (zeroing out the drives, badblocks), and both were RMA'd by Seagate within 3-5 days because they were still within the mfr warranty.
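For reference, those pre-add checks boil down to something like this (a sketch only; /dev/sdX is a placeholder, and badblocks' write mode is destructive, so never point it at a drive with data you care about):

```python
import subprocess

DRIVE = "/dev/sdX"  # placeholder: the new drive, NOT anything with data on it

# Destructive four-pass write/read pattern test; takes days on an 18TB drive.
# -b 4096 keeps the block count sane on very large drives.
subprocess.run(["badblocks", "-wsv", "-b", "4096", DRIVE], check=True)

# Zero the drive afterwards. dd exits nonzero when it runs off the end of the
# device, which is expected here, so no check=True.
subprocess.run(["dd", "if=/dev/zero", f"of={DRIVE}", "bs=1M", "status=progress"])
```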
If you're running a gigantic RAID array like me (288TB and counting!), it would be wise to recognize that rotational hard drives are doomed eventually, and you need a robust backup solution that can handle gigantic amounts of data long term. I have a tape drive for that, because I got it cheap at an electronics recycler sold as not working (thankfully it was an easy fix), but this is typically a super expensive route. If you only have like 20TB, you can look into stuff like cloud services, Blu-ray, redundant hard drives, etc., or do like I did in the beginning and just accept that your pirated anime collection might go poof one day lol
30/32 = 0.938
That’s less than a single terabyte. I have a microSD card bigger than that!
;)
radarr goes brrrrrr
sonarr goes brrrrrr…
barrrr?
...dum tss!
My first HDD had a capacity of 42MB. Still a short way to go until factor 10⁶.
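For the curious, the math (decimal units throughout):

```python
first_hdd = 42e6    # 42 MB
new_drive = 30e12   # 30 TB

print(f"{new_drive / first_hdd:,.0f}x")  # ~714,286x, so not quite 10^6 yet
print(f"factor 10^6 would be a {first_hdd * 1e6 / 1e12:.0f} TB drive")  # 42 TB
```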
My first HDD was a 20MB MFM drive :). Be right back, need some "Just For Men" for my beard (kidding, I'm proud of it).
So was mine, but the controller thought it was 10MB, so I had to load a device driver to access the full size.
It was fine until a friend defragged it and the driver moved out of the first 10MB. Thereafter I had to keep a 360KB 5¼" floppy drive to boot from.
That was in an XT.
> It was fine until a friend defragged it and the driver moved out of the first 10MB
Oh noooo 😭
Everybody talking shit about Seagate here. Meanwhile, I've never had a hard drive die on me. Eventually the capacity just became too little to keep around and I got bigger ones.
Oldest I'm using right now is a decade old, Seagate. Actually, all the HDDs are Seagate. The SSDs are Samsung. Granted, my OS is on an SSD, as well as my most used things, so the HDDs don't actually get hit all that much.
I've had a Samsung SSD die on me, I've had many WD drives die on me (also the last drive I've had die was a WD drive), I've had many Seagate drives die on me.
Buy enough drives, have them for a long enough time, and they will die.
Seagate had some bad luck with their 3TB drives about 15 years ago now, if memory serves me correctly.
Since then, Western Digital (the other half of the HDD duopoly) pulled some shenanigans by not correctly labeling which recording technology (SMR vs CMR) was in use on their NAS drives, which directly impacted their practicality and performance in NAS applications (the performance issues were particularly egregious when used in a ZFS pool).
So basically, pick your poison. It's hard to predict which of the duopoly will do something unworthy of trusting your data upon, so uh...check your backups I guess?
Seagate. The company that sold me an HDD which broke down two days after the warranty expired.
No thanks.
*laughs in Western Digital HDD that's been running for about 10 years now*
I had the opposite experience. My Seagates have been running for over a decade now. The one time I went with Western Digital, both drives crapped out in a few years.
This is for cold and archival storage, right?
I couldn't imagine seek times on a disk that large. Or rebuild times... yikes.
up your block size bro 💪 get them plates stacking 128KB+ a write and watch your throughput gains max out 🏋️ all the ladies will be like 🙋‍♀️. Especially if you get those reps in sequentially, it's like hitting the juice 💉 for your transfer speeds.
Definitely not for either of those. Can get way better density from magnetic tape.
They say they got the increased capacity by increasing storage density, so the head shouldn't have to move much further to read data.
You'll get further putting a cache drive in front of your HDD regardless, so it's vaguely moot.
Avoid these like the plague. I made the mistake of buying two 16TB Exos drives a couple years ago and have had to RMA them 3 times already.
Lmao the HDD in the first machine I built in the mid 90s was 1.2GB
My dad had a 286 with a 40MB hard drive in it. When it spun up, it sounded like a plane taking off. A few years later he had a 486 and got a 2GB Seagate hard drive. It was an unimaginable amount of space at the time.
The computer industry in the 90s (and presumably the 80s, I just don't remember it) was wild. Hardware would be completely obsolete every other year.
Just one would be a great backup, but I’m not ready to run a server with 30TB drives.
I'm here for it. The 8-disk server is normally a great form factor for size, data density and redundancy with RAID6/raidz2.
These drives would net around 180TB in that form factor. That would go a long way for a long while.
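Quick sketch of that, assuming raidz2's two drives of parity:

```python
disks = 8
capacity_tb = 30
parity = 2  # raidz2 / RAID6: any two disks can fail

usable_tb = (disks - parity) * capacity_tb
print(f"{usable_tb} TB usable")  # 180 TB
```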
Just a reminder: These massive drives are really more a "budget" version of a proper tape backup system. The fundamental physics of a spinning disc mean that these aren't a good solution for rapid seeking of specific sectors to read and write and so forth.
So a decent choice for the big machine you backup all your VMs to in a corporate environment. Not a great solution for all the anime you totally legally obtained on Yahoo.
Not sure if the general advice has changed, but you are still looking for a sweet spot in the 8-12 TB range for a home NAS where you expect to regularly access and update a large number of small files rather than a few massive ones.
HDD read rates are way faster than media playback rates, and seek times are just about irrelevant in that use case. Spinning rust is fine for media storage. It's boot drives, VM/container storage, etc, that you would want to have on an SSD instead of the big HDD.
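Rough numbers, with both rates being assumptions you can swap for your own:

```python
hdd_read_mb_s = 250   # assumed sequential read rate for a big modern HDD
stream_mbit_s = 100   # assumed bitrate of a heavy 4K remux

streams = hdd_read_mb_s / (stream_mbit_s / 8)
print(f"~{streams:.0f} simultaneous streams")  # ~20 before the drive sweats
```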
Not sure what you're going on about here. Even these disks have plenty of performance for read/write ops on rarely written data like media. They can be used by error-checking filesystems like ZFS or Btrfs just the same, and in RAID arrays, which add redundancy against disk failure.
The only negatives of large drives in home media arrays are the cost, slightly higher idle power usage, and the resilvering time when replacing a bad disk in an array.
Your 8-12TB recommendation already has most of these negatives; adding more space per disk just scales them linearly.
Additionally, most media is read in a contiguous scan. Streaming media is very much not random access.
Your typical access pattern is going to be seeking to a chunk, reading a few megabytes of data in a row for the streaming application to buffer, and then moving on. The ~10ms of access time at the start is next to irrelevant, particularly when you consider that the OS has likely noticed your unutilized RAM and loaded the entire file into the page cache, bypassing the hard drive entirely.
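As a toy sketch of that pattern (file name hypothetical, bitrate assumed):

```python
import time

CHUNK = 8 * 1024 * 1024    # buffer a few MB per read, like a player does
BYTES_PER_SEC = 100e6 / 8  # assumed 100 Mbps stream

with open("movie.mkv", "rb") as f:   # hypothetical media file
    while chunk := f.read(CHUNK):    # one seek, then purely sequential reads
        # A real player would decode here; we just pace reads like playback
        # does, so the drive sits idle most of the time.
        time.sleep(len(chunk) / BYTES_PER_SEC)
```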
> The fundamental physics of a spinning disc mean that these aren't a good solution for rapid seeking of specific sectors to read and write and so forth.
It's no SSD, but it's no slower than any other 12TB drive. It's not shingled (SMR) but HAMR. The sectors are closer together, so it has even better seeking speed than a regular 12TB drive.
> Not a great solution for all the anime you totally legally obtained on Yahoo.
????
It's absolutely perfect for that. Even if it were shingled tech, that only slows write speeds. Unless you are editing your own video, write seek times are irrelevant. For media playback, only consistent read speed matters. Not even read seek matters, except in extreme conditions like comparing tape seek to drive seek. You cannot measure a 10ms difference between clicking a video and it starting to play, because of all the other delays caused by streaming media over a network.
But that's not even relevant because these have faster read seeking than older drives because sectors are closer together.
I’m real curious why you say that. I’ve been designing systems with high IOPS data center application requirements for decades so I know enterprise storage pretty well. These drives would cause zero issues for anyone storing and watching their media collection with them.
I thought I read somewhere that larger drives had a higher chance of failure. Quick look around and that seems to be untrue relative to newer drives.
One problem is that larger drives take longer to rebuild the RAID array when one drive needs replacing. You're sitting there for days hoping that no other drive fails while the process runs. And current SATA and SAS links are already faster than the platters themselves can sustain, so a faster interface wouldn't speed anything up.
There was some debate among storage engineers about whether they even want drives bigger than 20TB; avoiding the risk of data loss during a rebuild can be worth the density trade-off. That will probably stay true until SSDs get closer to the price per TB of spinning platters (not necessarily equal; possibly more like double the price).
If you're writing 100 MB/s, it'll still take 300,000 seconds to write 30TB. 300,000 seconds is 5,000 minutes, or 83.3 hours, or about 3.5 days. In some contexts, that can be considered a long time to be exposed to risk of some other hardware failure.
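Same arithmetic, if you want to plug in your own numbers:

```python
capacity_tb = 30
rebuild_mb_s = 100  # assumed sustained rebuild/write rate

seconds = capacity_tb * 1e12 / (rebuild_mb_s * 1e6)
print(f"{seconds:,.0f} s = {seconds / 3600:.1f} h = {seconds / 86400:.1f} days")
# 300,000 s = 83.3 h = 3.5 days with the array degraded
```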
I mean, cool and all, but call me when SATA or M.2 SSDs are 10TB for $250, then we'll talk.
Not sure whether we'll arrive there; the tech is definitely entering the taper-out phase of the sigmoid. Capacity might very well still become cheaper, even 3x cheaper, but don't, in any way, expect it to simultaneously keep up with write performance; that ship has long since sailed. The more bits they try to squeeze into a single cell, the slower it gets, and the price per cell isn't going to change much any more, as silicon has hit a price wall: it's been a while since the newest, smallest node was also the cheapest.
OTOH, how often do you write a terabyte in one go at full tilt?