First off, I’d normally ask this question on a datahoarding forum, but this one is way more active than those and I’m sure there’s considerable overlap.
I have a Synology DS218+ that I got in 2020, so it's a 6-year-old model by now but only 4 years into its service. There's absolutely no reason to believe it'll start failing anytime soon, and it's been completely reliable. I'm just succession planning.
I'm looking ahead to my next NAS and wondering whether I should get the newer version of the same model (whenever that arrives) or expand to a 4-bay.
The drives are 14 TB shucked easy stores, for what it’s worth, and not even half full.
What are your thoughts?
I bought a Synology DS415+ back in December 2014, so it just turned 9 and it's still kicking. (Even with the C2000 fix.)
Although Synology stopped delivering updates, I'll keep it as long as it does what I need it to. However, my next device will be a TerraMaster that I'll install OMV on. You can't get a NAS that takes a custom OS in a smaller form factor.
As I can't predict the future, I believe you should juice it until the last drop; that way, at least when it fails, you won't have any regrets… pray & luck 🙏🤞
I’m still running a DS414 filled with WD Red drives. I’ve only swapped out one of the drives as it was starting to have issues. I’ve considered upgrading for more features (Docker, Home Assistant etc) but can’t justify the expense just for “nice to have” instead of “need”. Realistically it only stores Linux ISOs that I get with Download Station.
Yeah, performance is not an issue for me. I stream some Linux ISOs and so do a few friends. Pihole, photos backup, documents backup. That’s about it.
I’d be more concerned about the longevity of the drives than any NAS itself. I moved from commercial NAS appliances to a self-built one. It turns out that they cost about the same (depending on the hardware configuration you end up choosing, evidently), but are MUCH better performance-wise.
The NAS will most likely outlive the software support and, by far, the HDDs you put in it.
I had my DS213+ for a bit over 10 years, with no failures of any kind, just a bit of drive swapping for more storage space. Finally upgraded last year to a 4-bay with better performance and Docker support, but I would have kept using it otherwise.
Your NAS will last as long as your storage medium.
HDD lasts 5-10 years, SSD lasts like 10+
Not the batch of WD Red SSDs I got in 2022. 3 of the 4 have failed, and I'm assuming the 4th is going to die any day now. Fortunately WD honors their warranties, and only one drive died at a time, so my RAID was able to stay intact.
I feel like I must have gotten 4 from the same bad batch or something. One dying felt like bad luck, but when another died every 3 months it seemed like more than a coincidence. And none of the replaced ones have died, just the original batch.
So how long does an SSD last? YMMV.
Still running a DS210+ I bought second hand about 8 years ago… It hosts a website and downloads torrents, not much else. Think it's about time I upgraded.
I'd say 6-12 years, maybe including about one hard disk failing; I forget what the mean time to failure is for a hard disk. And in a decade I'll probably have all the disks filled to the brim, my usage pattern will have changed, and a new model will have 10x the network speed, 4x the storage, and be way faster in every other respect.
I built my 10ish TB (usable after raidz2) system in 2015. I did some drive swaps but I think it might have actually been a shoddy power cable that was the problem and the disks may have been fine.
What do you mean by “last”? I know it’s a common term, but when you dig deeper, you’ll see why it doesn’t really make sense. For this discussion, I’m assuming you mean “How long until I need to buy a newer model?”
First, consider the reasons you might have for buying a newer model. The first is hardware failure. The second is obsolescence - the device cannot keep up with newer needs, such as speed, capacity, or interfaces. The third is the device becoming insecure/unsupported by the vendor.
The last one is easy enough to check from a vendor’s product lifecycle page. I’ll assume this isn’t what you’re concerned about. Up next is obsolescence. Obviously it meets your needs today, but only you can predict your future needs. Maybe it’s fine for a single 1080p* stream today, and that’s all you use it for. It will continue to serve that purpose forever. But if your household grows and suddenly you need 3x 4k streams, it might not keep up. Or maybe you’ll only need that single 1080p stream for the next 20 years. Maybe you’ll hit drive capacity limits, or maybe you won’t. We can’t answer any of that for you.
That leaves hardware failure. But electronics don’t wear out (mechanical drives do, to an extent, but you asked about the NAS). They don’t really have an expected life span in the same way as a car battery or an appliance. Instead, they have a failure rate. XX% fail in a given time frame. Even if we assume a bathtub curve (which is a very bold assumption), the point where failures climb is going to be very unclear. The odds are actually very good that it will keep working well beyond that.
Also of note, very few electronics fail before they are obsolete.
*Technically it’s about bitrate, but let’s just ignore that detail for simplicity. We’ll assume that 4k uses 4x as much space as 1080p
TL;DR: It could fail at any moment from the day it was manufactured, or it could outlast all of us. Prepare for that scenario with a decent backup strategy, but don’t actually replace it until needed.
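If you want to put rough numbers on that "failure rate" framing, here's a minimal sketch under a deliberately naive assumption: a constant annualized failure rate that I've made up for illustration, not a measured figure.

```python
# Illustrative only: assumes a constant annualized failure rate (AFR),
# i.e. an exponential lifetime model with no bathtub-curve effects.
import math

afr = 0.015                # assume 1.5% of units fail per year (made-up figure)
rate = -math.log(1 - afr)  # continuous failure rate implied by that AFR

for years in (2, 5, 10, 15):
    survival = math.exp(-rate * years)
    print(f"P(still working after {years:>2} years) = {survival:.1%}")
```

Even 15 years out, this naive model says roughly four out of five units are still running, which is the point above: there's no cliff to plan a replacement around, so plan the backups instead.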
I've got a DS416 that I've had for almost a decade, and it's still going strong. The worst thing I've had to deal with is a shucked Easystore drive that died, but the other 3 are running fine.
Were you able to swap in a new one and copy everything back over?
I’ve had my Synology DS215 for almost ten years. I’ve recently thought about replacing it, but I don’t really see the benefit. I’ll just replace the drives some time.
Both the DS220+ and DS224+ have been a pleasure to set up, but I wouldn't replace your DS218+ just because. Just make sure your RAID is healthy, and your backups too.
An alternative to a standalone NAS is to set up your own little ITX server. Only if you enjoy tinkering, though; Synology is definitely easier.
At home I’m currently running Server/NAS/Gaming PC all in one.
It's a Debian 12 KVM/QEMU host with an M.2 NVMe disk for the host OS + VM OS and 2x16TB Seagate Exos disks in RAID1 for data storage. The rest of the hardware is a B650 ITX motherboard, an AMD Ryzen 7600 CPU, 2x32GB DDR5 RAM, an AMD Radeon 6650 XT, and a Seasonic FOCUS PX 750W PSU.
With my KVM/QEMU host, Game Server and Jellyfin Server online it eats about 60W-65W, so not that bad.
The GPU and a USB controller are passed through with VFIO to a Fedora VM that I use as a desktop and gaming client.
Just make sure to have a sound-dampening PC case so you can keep the servers online without being bothered. The GPU goes silent when the gaming VM is off.
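If anyone wants to copy the VFIO part, here's a minimal sketch (my addition, assuming a Linux host with the IOMMU enabled in firmware and in the kernel) for checking how the IOMMU groups fall, since a device you pass through ideally shouldn't share a group with anything unrelated:

```python
# Sketch: list IOMMU groups so you can see whether the GPU and the USB
# controller are isolated enough for VFIO passthrough.
# If the IOMMU is disabled, /sys/kernel/iommu_groups is missing or empty.
from pathlib import Path

groups_dir = Path("/sys/kernel/iommu_groups")
groups = sorted(groups_dir.glob("*"), key=lambda p: int(p.name)) if groups_dir.is_dir() else []
if not groups:
    raise SystemExit("No IOMMU groups found - is the IOMMU enabled?")

for group in groups:
    devices = sorted(dev.name for dev in (group / "devices").iterdir())
    print(f"Group {group.name}: {', '.join(devices)}")
```

A GPU sharing a group only with its own HDMI audio function is fine; sharing one with, say, the SATA controller is not.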
Oh, you are dancing with the devil. I'm not sure there's a way to check actual SMART data in Synology's OS, but I would be very interested in those logs (rough sketch for pulling them below).
I've found over the years that the second I think about backing up, the drive is about to fail.
I would upgrade to a 4-bay and invest in actual NAS drives. (And I'll personally be looking for 10GbE LAN, but this isn't homelab.)
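On the SMART question: if you can SSH into the box and smartctl is available (smartmontools is common on NAS distros; I'm assuming, not guaranteeing, that it's usable on DSM), here's a minimal sketch for dumping the handful of attributes people usually watch. Device names are placeholders, adjust to your setup.

```python
# Minimal sketch: print a few SMART attributes via smartctl.
# Assumes smartmontools is installed and this runs as root
# (locally or over SSH on the NAS). Device names are examples only.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb"]
WATCHED = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
           "Offline_Uncorrectable", "Power_On_Hours")

for dev in DEVICES:
    result = subprocess.run(["smartctl", "-A", dev],
                            capture_output=True, text=True)
    print(f"== {dev} ==")
    for line in result.stdout.splitlines():
        if any(name in line for name in WATCHED):
            print(line)
```

Nonzero reallocated or pending sector counts are usually the earliest warning you get before a drive actually drops out of the array.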
There’s nothing wrong with shucked drives, and they are frequently relabelled NAS drives anyway.
Just a die you don't need to roll.
Packaging a drive for sale in an external enclosure doesn’t make it any more prone to failure compared to one that wasn’t.
except you don’t know what you’re buying.
The fact that it's typically cheaper than buying the bare drive should tell you everything you need to know about the risk involved.
You have an idea of what you’re buying and you know what you have once you’ve shucked it. The worst case scenario is that it’s not what you expected, isn’t suited for that use case, you can’t find another use for it, and you can’t return it… but it’s not like anyone is forcing you to add an unsuitable drive to your setup.
There isn't even any proof from independent reviewers that specially certified drives have a longer lifespan. You can see it when you compare OEM prices for different drives. Quite often datacenter-labeled drives are more expensive than the prosumer drives, because consumers are idiots and buy into marketing.
There are other problems with shucking, like the warranty, but the dice roll certainly isn't one of them.
That the market buying internal drives is generally willing to pay more than the people buying external drives? Because the cost of the parts (aka the Bill of Materials, or BOM) is only a small part of what determines the price on the shelf.
The fact that WD has a whole thing about refusing to honor the warranty (likely in violation of the Magnuson-Moss Warranty Act) should tell you what you really need to know.
This is misinformation; I have always known what drives to expect when shucking. Not only that, but you can tell what drive is inside just by plugging it in and checking before you shuck it. I've shucked over 16 drives so far and all were exactly as expected.
The drives in WD externals are white-label, but they're WD Reds. They're cheaper because they're consumer-facing, no more, no less. Have you been bitten by shucking in the past? I'm confused why else you'd say it's a risk. The only risk involved is warranty-related.
People have tested them long term at this point. Outside of a few rare exceptions, there’s not a noticeable difference in reliability between shucked drives and ‘normal’ drives. They’re the same stock but just rebranded and have to be cheaper because they’re marketed primarily for retail as opposed to enthusiast/enterprise who are willing to pay more.