Can’t wait to see this bad boy on serverpartdeals in a couple years if I’m still alive
if I’m still alive
That goes without saying, unless you anticipate something.
Finally, a hard drive which can store more than a dozen modern AAA games
Finally, I’ll be able to self-host One Piece streaming.
My qBittorrent is gonna love that.
Great, can’t wait to afford it in 60 years.
What is the use case for drives that large?
I ‘only’ have 12TB drives, and yet my ZFS pool already needs ~two weeks to scrub it all. With something like this, it would literally not be done before the next scheduled scrub.
It’s like the Petronas Towers: every time they finish cleaning the windows, they have to start again.
Data centers???
Sounds like something is wrong with your setup. I have 20TB drives (×8, RAID 6, 70+TB in use) … scrubbing takes less than 3 days.
Jesus, my pool takes a little over a day, but I’ve only got around 100 GB. How big is your pool?
The pool is about 20 usable TB.
Something is very wrong if it’s taking 2 weeks to scrub that.
High capacity storage pools for enterprises.
Space is at a premium. Saving space should/could equate to better pricing/availability.
Not necessarily.
The trouble with spinning platters this big is that if a drive fails, it takes a long time to rebuild the array after shoving a new one in there. Sysadmins will be nervous about another failure taking out the whole array until that process is complete, and it can take days. There was some debate a while back about whether the industry even wanted spinning platters >20TB. Some are willing to give up density if it means less worry.
I guess Seagate decided to go ahead, anyway, but the industry may be reluctant to buy this.
I would assume that with arrays this size they’ll use a different way to calculate parity, or have higher redundancy, to compensate for the risk.
If there’s higher redundancy, then they are already giving up on density.
We’ve pretty much covered the likely ways to calculate parity.
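In case it helps anyone following along, here’s a minimal toy sketch of classic RAID-5-style XOR parity in Python. Byte strings stand in for drives here, so it’s nothing like a real block layer, but it shows why losing any single drive is recoverable:

```python
# Toy RAID-5-style parity: byte strings stand in for drives.
def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data_drives = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_blocks(data_drives)  # what the parity drive stores

# "Lose" drive 1, then rebuild it from the survivors plus parity:
rebuilt = xor_blocks([data_drives[0], data_drives[2], parity])
assert rebuilt == data_drives[1]
```

Double parity (RAID 6 / raidz2) adds a second, differently computed parity block so you can survive two failures, at the cost of another drive’s worth of capacity.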
I worked on a terrain render of the entire planet. We were filling three 2TB drives a day for a month. So this would have been handy.
There is an enterprise storage shelf (aka a bunch of drives that hooks up to a server) made by Dell which is 1.2 PB (yes petabytes). So there is a use, but it’s not for consumers.
That’s a use-case for a fuckton of total capacity, but not necessarily a fuckton of per-drive capacity. I think what the grandparent comment is really trying to say is that the capacity has so vastly outstripped mechanical-disk data transfer speed that it’s hard to actually make use of it all.
For example, let’s say you have these running in a RAID 5 array, and one of the drives fails and you have to swap it out. At a 190MB/s max sustained transfer rate (the figure for a 28TB Seagate Exos; I assume this new one is similar), you’re talking over two days just to rebuild onto the new drive and get the array out of degraded mode! At some point these big drives stop being suitable for that use case, because the vulnerability window is so large that the risk of a second drive failure causing data loss is too great.
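Back-of-the-envelope, assuming that ~190MB/s figure holds for the 36TB drive (an assumption on my part):

```python
capacity = 36e12   # bytes in a 36 TB drive
rate = 190e6       # ~190 MB/s sustained, the Exos figure quoted above
seconds = capacity / rate
print(f"{seconds / 3600:.1f} h, ~{seconds / 86400:.1f} days")
# -> 52.6 h, ~2.2 days
```

And that’s a best case, with zero other I/O hitting the array during the rebuild.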
That’s exactly what I wanted to say, yes :D
I get it. But the moment we invoke RAID or ZFS, we are outside what standard consumers will ever interact with, and therefore into business use cases. Remember, even simple homelab use cases involving Docker are well past what the bulk of the world understands.
I would think most standard consumers are not using HDDs at all.
What drives do you have exactly? I have 7×6TB WD Red Pro drives in raidz2 and I can do a scrub in less than 24 hours.
I have 2×12TB white-label WD drives (harvested from external drives, but datacenter drives according to the SN) and one 16TB Toshiba white-label (purchased directly, also meant for datacenters) in a raidz1.
How full is your pool? Mine is about two-thirds full, which I think impacts scrubbing. I also frequently access the pool, which delays scrubbing.
It’s like 90% full, scrubbing my pool is always super fast.
Two weeks to scrub the pool sounds like something is wrong tbh.
What’s scrubbing for?
A ZFS scrub validates all the data in a pool and corrects any errors.
I’m not in the know on running your own personal data center, so I have no idea. … But how often is this necessary? Does accessing your own data on your hard drive require a scrub? I just have a 2TB in my home PC. Is the equivalent of a scrub like a disk cleanup?
You usually scrub your pool about once a month, but there are no hard rules on that. The main problem with scrubbing is that it puts a heavy load on the pool, slowing it down.
Accessing the data does not need a scrub; it is only a routine maintenance task. A scrub is not like a disk cleanup. With a disk cleanup you remove unneeded files and caches, and maybe defragment as well. A scrub, on the other hand, validates that the data you stored on the pool is still the same as before. This is primarily to protect against things like bit rot.
There are many ways a drive can degrade: sectors can become unreadable, random bits can flip, a write can be interrupted by a power outage, etc. Normal file systems like NTFS or ext4 can only handle this in limited ways, mostly by deleting the corrupted data.
ZFS, on the other hand, is built on redundant storage. It spreads the data over multiple drives in a special way, allowing it to recover from most corruption and even survive the complete failure of a disk. This comes at the cost of losing some capacity, however.
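If a sketch helps, the core idea is just checksums plus a redundant copy. This toy Python is not how ZFS actually works internally (real ZFS checksums every block and keeps the checksum in the parent block pointer), it just shows the principle a scrub relies on:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

good = checksum(b"family photos")   # recorded when the data was written

mirror_a = bytearray(b"family photos")
mirror_b = bytearray(b"family photos")
mirror_a[0] ^= 0x01                 # simulate bit rot on one mirror

# The "scrub": verify each copy against the checksum, heal from a good one.
if checksum(bytes(mirror_a)) != good:
    mirror_a[:] = mirror_b          # repair the corrupted copy
assert checksum(bytes(mirror_a)) == good
```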
Thank you for all this information. One day, when my ADHD forces me into making myself a home server, I’ll remember this and keep it in mind. I’ve always wanted to store movies, but these days it’s just family pictures and stuff. Definitely don’t have terabytes, but I’m getting up into the 100s of GBs.
It’s to play Ark: Survival Evolved.
There was a time I asked this question about 500 megabytes.
I am not questioning the need for more storage, but the need for more storage without increased speeds.
I too, am old.
I’m older than that but didn’t want to self-report. The first hard disk I remember my father buying was 40MB.
I remember renting a game, and it was on a high-density 5.25″ floppy at a whopping 1.2MB; but our family computer only had a standard-density 5.25″ drive.
So we went to the neighbor’s house, who was one of the first computer nerds (I’m not sure he’s still alive now), who copied the game to a 3.5″ high-density 1.44MB disk, and then we returned the rental because we couldn’t play it on the 1.2MB HD 5.25″ floppy.
… And that was the first time I was party to piracy.
Me, who stores important data on a Seagate external HDD with no backup, reading the comments roasting Seagate:
Uh oh!!! Uh oh uh oh uh oh uh oh
I’m amazed it’s only $800. I figured that shit was gonna be like 8-10 thousand.
Well, it’s a Seagate, so it still comes out to about a hundred bucks a month.
Why do you wound me like this?
Yeah, I expected it to level out around $800 after a few years, not out of the gate. 20TB drives are still $300-ish new.
This hard drive is so big that when it sits around the house, it sits around the house.
This hard drive is so big when it moves, the Richter scale picks it up.
This hard drive is so big when it backs up it makes a beeping sound.
This hard drive is so big, when I tried to weigh it the scale just said “one at a time please”.
This hard drive’s so big that two people can access it at the same time and never meet.
This hard drive is so big, that astronomers thought it was a planet.
Is it worth replacing within a year, only to be sent a refurbished one when it dies?
Use redundancy. Don’t be a pleb.
That’s a lot of porn. And possibly other stuff, too.
Nah, the other stuff will all fit on your computer’s hard drive, this is only for porn. They should call it the Porn Drive.
And possibly other stuff, too.
Ehhh don’t test me
It isn’t as much as you think; high-resolution, high-bitrate video files are pretty large.
Especially VR files
I’m gonna need like 6 of these
*monkey’s paw curls* They’re SMR.
Seems fine with a couple TB of SSDs to act as active storage with regular rsyncs back to the HDDs. This is fine.
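Something like this, say (the paths are made up; rsync’s archive mode does the heavy lifting, and after the painful first pass it only copies what changed):

```python
import subprocess

HOT = "/mnt/ssd/active/"          # fast SSD working set (made-up path)
COLD = "/mnt/bigdrive/archive/"   # the 36TB HDD (made-up path)

# -a preserves times/permissions, --delete mirrors removals too.
subprocess.run(["rsync", "-a", "--delete", HOT, COLD], check=True)
```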
The first copy of anything big will suck ass… and why else would you get a 36TB drive if not to copy a lot of data to it?
My primary storage use case is physical media backups. I literally don’t care how long it takes to store; a Blu-ray is 70GB and I’ve got around 200 of ’em to back up.
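Taking my own numbers at face value, and borrowing the ~190MB/s sustained figure from upthread:

```python
discs, per_disc, rate = 200, 70e9, 190e6   # 70 GB each, ~190 MB/s sustained
hours = discs * per_disc / rate / 3600     # ~14 TB total
print(f"~{hours:.0f} hours for the first full copy")   # -> ~20 hours
```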
Does it really matter that much if the first copy takes a while, though? You’re only doing it once, and you don’t even have to do it all in one go. Just letting it run over the weekend would do.
It matters to me. I got stuff to back up regularly, and I ain’t got all weekend.
It’s only the first copy that takes such a long time. After that you only copy the changes.
That depends entirely on your use case.
Sorry but without a banana for scale it’s hard to tell how big it really is
36 Typical Bananas
28 plantains
That’s quite large then.
I wonder how many pictures of nude bananas you could fit inside??
Depending on the quality you want to deal with, at least 3.
Is Seagate still producing shitty drives that fail a few days after the warranty expires?
Some models are quite a bit worse than average, while some are on par with the competition.
Mine have been going strong for five years. IronWolf Pros.
Hey, they told you how long they expected it to last 😅
Fair point. But still pretty bad: literally two days after the warranty expired, my Seagate drive was broken. It was my first and only Seagate drive. Never again.
Meanwhile, my old Western Digital drive is still kicking way beyond its warranty. Almost 10 years now.
No thanks, Seagate. The trauma of losing my data because of botched firmware with a ticking time bomb kinda put me off your products for life.
See you in hell.
Some of Seagate’s drives have terrible scores in things like Backblaze’s stats. They are probably the worst brand, but also generally the cheapest.
I have been running a RAID of old Seagate Barracudas for years at this point, including a lot of boot cycles and me forcing the system off because TrueNAS has issues or whatnot, and for some fucking reason they won’t die.
I have had a WD Green SSD that I used for TrueNAS boot die, I had a WD external drive’s controller die (the drive inside still works), and I had some crappy mismatched WD drives in a RAID 0 for my Linux ISOs, and those failed as well.
Whenever the Seagates start to die, I guess I’ll be replacing them with Toshibas, unless somebody has another suggestion.
I had a similar experience with Samsung. I had a bunch of 870 EVO SSDs up and die for no reason. Turns out, it was a firmware bug in the drive and they just needed an update, but the update needs to happen before the drive fails.
I had to RMA the failures. The rest were updated without incident and have been running perfectly ever since.
I’d still buy Samsung.
I didn’t lose a lot of data, but I can certainly understand holding a grudge on something like that. From the other comments here, hate for Seagate isn’t exactly rare.
I can certainly understand holding grudges against corporations. I didn’t buy anything from Sony for a very long time after their fuckery with George Hotz, and Nintendo’s latest horseshit has me staying away from them. But that was a single firmware bug that locked up hard drives (note: the data was still intact) a very long time ago. Seagate even issued a firmware update to prevent the bug from biting users it hadn’t hit yet, but firmware updates at the time weren’t really something people thought to do, and operating systems didn’t check for them automatically back then like they do now.
Seagate fucked up, but they also did everything they could to make it right. That matters. Plus, look at their competition: WD famously lied about their Red drives not being SMR when they actually were. And I’ve only ever had WD hard drives and SanDisk flash drives die on me. And guess who owns SanDisk? Western Digital!
I guess if you must go with another company, there are the louder and more expensive Toshiba drives, but I have never used those, so I know nothing about them aside from their reputation for being loud.
And I’ve only ever had WD hard drives and SanDisk flash drives die on me
Maybe it’s confirmation bias, but almost all the storage that has failed on me has been SanDisk flash storage. The only exception was a Corsair SSD, which failed after 3 years as a main laptop drive plus another 3 as a server boot and log drive.
Every manufacturer has made a product that failed.
But not every manufacturer has had class-action lawsuits filed over their continued shitty products.
Can someone recommend me a hard drive that won’t fail immediately? Internal, not SSD (the cheap ones of which will die even sooner), and I need it for archival reasons, not speed or fancy new tech; otherwise I have two SSDs.
I think refurbished enterprise drives usually have a lot of extra protection hardware that helps them last a very long time. Seagate advertises a mean time to failure on their Exos drives of ~200 years at a moderate level of usage. I feel like it would almost always be a better choice to get more refurbished enterprise drives than fewer new consumer drives.
I personally found an 8TB Exos on serverpartdeals for ~$100 which seems to be in very good condition after checking the SMART data. I’m just using it as a backup, so there isn’t any data on it that isn’t also somewhere else, so I didn’t bother with redundancy.
I’m not an expert, but this is just from the research I did before buying that backup drive.
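For anyone doing the same homework: the SMART check is just smartmontools. A rough sketch of what I look at (the device path is an example, and the attribute names are common ones but vary by vendor, so treat it as a starting point):

```python
import subprocess

# Dump vendor SMART attributes via smartctl and pick out the usual
# wear indicators. Requires smartmontools and the right device path.
out = subprocess.run(["smartctl", "-A", "/dev/sda"],
                     capture_output=True, text=True).stdout
for line in out.splitlines():
    if any(a in line for a in ("Reallocated_Sector_Ct",
                               "Current_Pending_Sector",
                               "Power_On_Hours")):
        print(line)
```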
My WD Red Pros have almost all lasted me 7+ years, but the best thing (and probably cheapest nowadays) is a proper 3-2-1 backup plan: three copies, on two different media, with one offsite.
If you’re relying on one hard drive not failing to preserve your data, you’re doing it wrong from the jump. I’ve got about a dozen hard drives in play from Seagate and WD at any given time (mostly Seagate, because they’re cheaper and I don’t need speed either) and haven’t had a failure yet. Backblaze used to publish stats about the hard drives they use; not sure if they still do, but that would give you some data to go off. Seagate did put out some duds a while back, but other models are fine.
The Backblaze stats were always useless, because they would tell you what failed long after that run of drives was available.
There are only 3 manufacturers at this point, so just buy one or two of each color and call it a day. ZFS in raidz2 is good enough for most things at this point.
Hard drives aren’t great for archival in general, but any modern drive should work. Grab multiple brands and make at least two copies. Look for sales. Externals regularly go below $15/TB these days.
Word to the wise: those externals usually won’t last 5+ years of constant use as an internal.
Elaborate please?
https://www.eevblog.com/forum/chat/whats-behind-the-infamous-seagate-bsy-bug/
This thread has multiple documented instances of poor QA and firmware bugs Seagate has shipped at the cost of their own customers.
My specific issue was even longer ago, 20+ years. There was a bug in the firmware: a runtime counter overflowed its int limit, which caused a cascading failure in the firmware and locked the drive up once it hit that maximum. That’s my understanding of it, anyway.
The only solution was to purchase a board online for the exact model of your HDD, swap it in, and perform a firmware flash before time ran out. I think you could also use a clip and force-program the firmware.
At the time, a new board cost as much as a new drive, and I didn’t have the finances for it.
Eventually I moved past the 1TB of data I lost, but I will never willingly purchase another Seagate.
In my case, 10+ years ago, I had 6×3TB Seagate disks in a software RAID 5. Two of them failed, and it took me days to force the array back together and get some of the data off. Now I use WD and RAID 6.
I read 3 or 4 years ago that it was just the 3TB Reds I used that had a high failure rate, but I’m still only buying WDs.
I had a single Red 2TB in an old TiVo Roamio for almost a decade.
Pulled it out this weekend and finally tested it. Failed.
I was planning to move my 1.5TB music collection to it. Glad I tested it first, lol.
Thanks, yeah that makes sense.
But then there’s WD and their fake Red NAS drives with SMR tech?
What else do we have?
Wait… fake? I just bought some of those.
They were selling WD Red (Pro?) drives with SMR tech, which is known to be disastrous for disk arrays, because both traditional RAID and ZFS tend to throw them out. The reason is that once you’ve been filling an SMR drive for a while, especially quickly, it can’t keep up with your writes anymore: the disk has to rearrange its data before it can write more, so write operations start taking a very long time. The RAID solution just sees a drive that isn’t responding to write commands, and concludes the drive is bad.
They were selling WD Red (Pro?) drives with SMR tech
Didn’t they used to have only one “Red” designation? Or maybe I’m hallucinating. I thought “Red Pro” was introduced after that kerfuffle to distinguish the SMR from the CMR.
I’ve had a couple randomly drop from my array recently, but they were older, so I didn’t think twice about it. Does this permafry them, or can you remove them from the array and reinitialize them to work again?
Well, it depends. If they were dropped just because they’re SMR and were writing slowly, I think they’re fine. But otherwise…
What array system do you use? Some RAID software, or ZFS?
Windows Server storage solutions. I took them out of the array and they still weren’t recognized in Disk Management, so I assume they’re shot. It was just weird having two fail the same way.