I’m writing a program that wraps around dd to try and warn you if you are doing anything stupid. I have thus been giving the man page a good read. While doing this, I noticed that dd supports sizes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.
This has caused me to wonder about the largest storage operation you guys have done. I’ve taken a couple of images of hard drives that were a single terabyte in size, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
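For context, this is roughly the kind of check I have in mind, as a bash sketch (not the actual program; the findmnt-based test and the prompt wording are just illustrative):

#!/bin/bash
# Sketch of a dd wrapper: warn before writing to a block device that is currently mounted.
target=$(printf '%s\n' "$@" | sed -n 's/^of=//p')
if [ -b "$target" ] && findmnt --source "$target" >/dev/null; then
  echo "warning: $target is a mounted block device" >&2
  read -rp "continue anyway? [y/N] " ans
  [ "$ans" = y ] || exit 1
fi
exec dd "$@"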
As a single file? Likely a 20GB ISO.
As a collective job, 3TB of videos between hard drives for Jellyfin.
@data1701d Downloading Forza Horizon 5 on Steam, at around 120GB, is the largest web download I can remember. On the LAN, I’ve migrated my old FreeBSD NAS to my new one, which was roughly a 35TB transfer over NFS.
How long did that 35TB take? 12 hours or so?
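Napkin math: 35TB is about 280 Tbit, so a sustained 1Gbit/s would take roughly 78 hours; doing it in 12 hours implies something like 6–7 Gbit/s end to end.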
Entire drive/array backups will probably be by far the largest file transfer anyone ever does. The biggest I’ve done was a measly 20TB over the internet which took forever.
Outside of that, the largest “file” I’ve copied was just over 1TB, which was a SQL backup file for our main databases at work.
10TB is child’s play
brother?..
+1
From an order of magnitude perspective, the max is terabytes. No “normal” users are dealing with petabytes. And if you are dealing with petabytes, you’re not using some random poster’s program from reddit.
For a concrete cap, I’d say 256 tebibytes…
We have DBs in the dozens of TB at work so probably one of them
I don’t remember how many files, but typically these geophysical recordings clock in at 10-30 GB. What I do remember, though, was the total transfer size: 4TB. It was a bunch of .segd files (geophysics stuff) stored in a server cluster mounted in a shipping container, and some geophysics processors needed it on the other side of the world. There was nobody physically heading in the same direction as the transfer, so we figured it would just be easier to rsync it over 4G. It took a little over a week to transfer.
Normally when we have transfers of a substantial size going far, we ship it on LTO. For short-distance transfers we usually run a fiber, and I have no idea how big the largest transfer job has been that way. Must be in the hundreds of TB. The entire cluster is 1.2PB, but I can’t recall ever having to transfer everything in one go, as the receiving end usually has a lot less space.
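For a link like that, the main thing is making the transfer resumable; it was basically rsync along these lines (the paths and hostname are placeholders, not the exact command we ran):

rsync -a --partial --append-verify --info=progress2 /data/segd/ user@processing-site:/incoming/segd/

--partial keeps partially transferred files and --append-verify lets them pick up where they left off, which matters a lot on a flaky mobile link.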
4G?! That strikes fear into my heart!
The alternative was 5mbit/s VSAT. 4G was a luxury at that time.
At the rates I’m paying for 4G data, there are very few places in the world where it wouldn’t be cheaper for me to get on a plane and sneakernet that much data
~15TB over the internet via 30Mbps uplink without any special considerations. Syncthing handled any and all network and power interruptions. I did a few power cable pulls myself.
I think it’s crazy that not that long ago 30Mbps was still pretty good; we now have 1Gbps+ at residential addresses, and it’s fairly common too.
I’ve got symmetrical gigabit in my apartment, with the option to upgrade to 5 or 8. I’d have to upgrade my equipment to use those speeds, but it’s nice to know I have the option.
Fiber is so nice
Yeah, I also moved from 30Mb upload to 700Mb recently and it’s just insane. It’s also insane thinking I had a symmetric gigabit connection in Eastern Europe in the 2000s for fairly cheap. It was Ethernet though, not fiber. Patch cables and switches all the way to the central office. 🫠
Most people in Canada today have 50Mb upload at the most expensive connection tiers, on DOCSIS 3.x. Only over the last few years has fiber started to become more common, but it’s still fairly uncommon, as it’s the most expensive connection tier, if it’s available at all.
We might pay for some of the most expensive internet in the world in Canada, but at least we can’t fault them for providing an unstable or underperforming service. Downloading llama models is where 1Gbps really shines: you see a 7GB model? It’s done before you’re even back from the toilet. Crazy times.
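(7GB at a full gigabit is only about a minute of transfer: 7 × 8 = 56 Gbit, so roughly 56 seconds at line rate, a bit more in practice.)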
I should have known that the person on the internet noting that 30Mbps was pretty good till recently is a fellow Canadian. 🍁 #ROBeLUS
BTW, TekSavvy recently started offering fiber seemingly on Bell’s last mile.
How long did that take? A month or two? I’ve backfilled my NAS with about 40 TB before over a 1 gig fiber pipe in about a week or so of 24/7 downloading.
Yeah, something like that. I verified it with rsync after that, no errors.
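The verification was basically a checksum dry run, something like this (paths are placeholders and the exact flags are from memory):

rsync -avcn /data/ user@remote:/data/

-c re-reads and checksums both sides, and -n makes it report anything that differs without transferring it.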
Not that big by today’s standards, but I once downloaded the Windows 98 beta CD from a friend over dialup, 33.6k at best. Took about a week as I recall.
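The timing checks out, too: a CD image in the 650MB range at a theoretical 33.6 kbit/s is about 1.8 days of uninterrupted transfer, so with real-world throughput and redials a week sounds about right.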
Yep, downloaded XP over a 33.6k modem, but I’m in NZ, so 33.6 was more advertising than reality; it took weeks.
I remember downloading the scene on American Pie where Shannon Elizabeth strips naked over our 33.6 link and it took like an hour, at an amazing resolution of like 240p for a two minute clip 😂
Totally worth it.
And then you busted after 15 seconds?
Currently pushing about 3-5 TB of images to AI/ML scanning per day. Max we’ve seen through the system is about 8 TB.
Individual file? Probably 660 GB of backups before a migration at a previous job.
Largest one I ever did was around 4.something TB. New off-site backup server at a friend’s place. Took me 4 months due to data limits and an upload speed that maxed out at 3MB/s.
In grad school I worked with MRI data (hence the username). I had to upload ~500GB to our supercomputing cluster. Somewhere around 100,000 MRI images, and wrote 20 or so different machine learning algorithms to process them. All said and done, I ended up with about 2.5TB on the supercomputer. About 500MB ended up being useful and made it into my thesis.
Don’t stay in school, kids.
You should have said no to math, it’s a helluva drug
golden 😂😂
i’ve transferred 10’s of ~300 GB files via manual rsyncs. it was a lot of binary astrophysical data, most of which was noise. eventually this was replaced by an automated service that bypassed local firewalls with internet-based transfers and aws stuff.
I mean, dd claims it can handle a quettabyte, but how can we be sure?
dd can’t really handle quettabytes! GNU has taken us all for fools! Alert the masses! Wake up sheeple!
dd if=/dev/zero of=/dev/null status=progress
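If you actually run that, note that dd defaults to 512-byte blocks; a bigger block size makes the counter move at a much more respectable rate:

dd if=/dev/zero of=/dev/null bs=1M status=progress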
Around 15 TB migrating to a new NAS.
Rsynced 4.2TB of data from one server to another, but with multiple files.