I’m writing a program that wraps dd and tries to warn you if you’re about to do something stupid (rough sketch below). I have thus been giving the man page a good read. While doing this, I noticed that dd supports size suffixes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.

This has caused me to wonder: what’s the largest storage operation you guys have done? I’ve taken a couple of images of hard drives that were a single terabyte in size, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
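
In case anyone’s curious, here’s the rough shape of the wrapper so far. This is a minimal sketch, not the actual program, and the mounted-device check is just one of the warnings I have in mind:

```bash
#!/usr/bin/env bash
# Sketch only: pull of= out of the dd arguments and complain if the
# target is a block device that (or whose partitions) appears mounted.
set -euo pipefail

target=""
for arg in "$@"; do
    [[ "$arg" == of=* ]] && target="${arg#of=}"
done

if [[ -n "$target" && -b "$target" ]] && grep -q "^$target" /proc/mounts; then
    echo "WARNING: $target (or a partition on it) is mounted." >&2
    read -r -p "Write to it anyway? [y/N] " reply
    [[ "$reply" == [yY] ]] || exit 1
fi

exec dd "$@"   # hand off untouched, so output and exit codes are dd's own
```

You’d invoke it exactly like dd itself, e.g. ./safedd if=image.img of=/dev/sdX bs=4M status=progress (safedd being a placeholder name).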

  • slazer2au@lemmy.world · 3 months ago

    As a single file? Likely a 20GB ISO.
    As a collective job, 3TB of videos moved between hard drives for Jellyfin.

  • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 3 months ago

    Entire drive/array backups are probably by far the largest file transfers anyone ever does. The biggest I’ve done was a measly 20TB over the internet, which took forever.

    Outside of that, the largest “file” I’ve copied was just over 1TB: a SQL backup of our main databases at work.

  • neidu2@feddit.nl · 3 months ago

    I don’t remember how many files, but these geophysical recordings typically clock in at 10-30GB each. What I do remember, though, is the total transfer size: 4TB. It was a bunch of .segd files (geophysics stuff) stored on a server cluster mounted in a shipping container, and some geophysics processors needed the data on the other side of the world. Nobody was physically heading in the same direction as the transfer, so we figured it would just be easier to rsync it over 4G (roughly the invocation below). It took a little over a week to transfer.
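
    From memory, the job was shaped something like this. The paths are made up and the flags are from memory too, with --partial being the one that mattered, since the 4G link dropped constantly:

    ```bash
    # -a preserves permissions and timestamps across the copy
    # --partial keeps half-transferred files so a dropped link resumes
    # --info=progress2 reports whole-transfer progress, not per-file
    rsync -a --partial --info=progress2 \
        /cluster/segd/ geo@processing.example.org:/incoming/segd/
    ```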

    Normally, when we have transfers of a substantial size going far, we ship them on LTO. For short-distance transfers we usually run a fiber, and I have no idea how big the largest transfer job done that way has been. It must be in the hundreds of TB. The entire cluster is 1.2PB, but I can’t recall ever having to transfer everything in one go, as the receiving end usually has a lot less space.

  • Avid Amoeba@lemmy.ca · 3 months ago

    ~15TB over the internet via a 30Mbps uplink, without any special considerations. Syncthing handled all the network and power interruptions. I did a few power-cable pulls myself.
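
    For scale, a back-of-the-envelope calculation (assuming the uplink stayed saturated, which it didn’t) puts the floor at about a month and a half:

    ```bash
    # 15 TB ≈ 1.2e14 bits pushed through 30 Mbps, in days
    echo 'scale=1; 15 * 8 * 10^12 / (30 * 10^6) / 86400' | bc
    # prints 46.2 -> ~46 days, before overhead and interruptions
    ```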

    • Pasta Dental@sh.itjust.works · 3 months ago

      I think it’s crazy that not that long ago 30Mbps was still pretty good; we now have 1Gbps+ at residential addresses, and it’s fairly common too.

      • Confused_Emus@lemmy.dbzer0.com · 3 months ago

        I’ve got symmetrical gigabit in my apartment, with the option to upgrade to 5 or 8. I’d have to upgrade my equipment to use those speeds, but it’s nice to know I have the option.

      • Avid Amoeba@lemmy.ca · 3 months ago

        Yeah, I also moved from 30Mb upload to 700Mb recently and it’s just insane. It’s also wild to think I had a symmetric gigabit connection in Eastern Europe in the 2000s for fairly cheap. It was Ethernet though, not fiber: patch cables and switches all the way to the central office. 🫠

        Most people in Canada today get 50Mb upload at best, and only on the most expensive DOCSIS 3.x tiers. Fiber has only begun to become more common over the last few years, and where it’s available at all it’s still the most expensive option.

        • Pasta Dental@sh.itjust.works · 3 months ago

          We might pay for some of the most expensive internet in the world in Canada, but at least we can’t fault them for providing an unstable or underperforming service. Downloading llama models is where 1Gbps really shines: you see a 7GB model? It’s done before you’re even back from the toilet (7GB at 1Gbps is under a minute). Crazy times.

          • Avid Amoeba@lemmy.ca · 3 months ago

            I should have known that the person on the internet noting that 30Mbps was pretty good until recently would be a fellow Canadian. 🍁 #ROBeLUS

            BTW, TekSavvy recently started offering fiber seemingly on Bell’s last mile.

    • pete_the_cat@lemmy.world · 3 months ago

      How long did that take? A month or two? I’ve backfilled my NAS with about 40TB before, over a 1-gig fiber pipe, in about a week or so of 24/7 downloading.
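
      That week checks out, for what it’s worth; at a perfect 1Gb/s line rate it would be:

      ```bash
      # 40 TB through a 1 Gb/s pipe, in days
      echo 'scale=1; 40 * 8 * 10^12 / 10^9 / 86400' | bc
      # prints 3.7 -> ~3.7 days at 100% utilization, so about a week
      # at real-world throughput is plausible
      ```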

  • Davel23@fedia.io · 3 months ago

    Not that big by today’s standards, but I once downloaded the Windows 98 beta CD from a friend over dialup, 33.6k at best. Took about a week, as I recall.
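
    A week is believable: even assuming a full ~650MB CD image and a modem that never dropped below 33.6kbps, the line-rate floor is close to two days:

    ```bash
    # ~650 MB over a perfect 33.6 kbps modem, in days
    echo 'scale=1; 650 * 8 * 10^6 / 33600 / 86400' | bc
    # prints 1.7 -> nearly two days at line rate; with real modem
    # throughput and redials, a week sounds about right
    ```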

  • psmgx@lemmy.world · 3 months ago

    Currently pushing about 3-5 TB of images to AI/ML scanning per day. Max we’ve seen through the system is about 8 TB.

    Individual file? Probably 660 GB of backups before a migration at a previous job.

  • Yeahboiiii@lemm.ee · 3 months ago

    Largest one I ever did was around 4.something TB: a new off-site backup server at a friend’s place. Took me four months due to data caps and an upload speed that maxed out at 3MB/s.

  • Neuromancer49@midwest.social · 3 months ago

    In grad school I worked with MRI data (hence the username). I had to upload ~500GB, somewhere around 100,000 MRI images, to our supercomputing cluster, and I wrote 20 or so different machine-learning algorithms to process them. All said and done, I ended up with about 2.5TB on the supercomputer. About 500MB of that ended up being useful and made it into my thesis.

    Don’t stay in school, kids.

  • Llituro [he/him, they/them]@hexbear.net · 3 months ago

    I’ve transferred tens of ~300GB files via manual rsyncs. It was a lot of binary astrophysical data, most of which was noise. Eventually this was replaced by an automated service that got around the local firewalls with internet-based transfers and AWS stuff.