Hello all,

I recently bought an external 4 TB drive for backups, including an image of another 2 TB drive (in case it fails). The drives are used for cold storage (backups). I would like a recommendation on which filesystem I should format it with. From the factory it comes with NTFS, and that is OK, but I wonder if it would be better with something like ext4. Being readable directly from Windows won't be necessary (although it would be useful), since I could just temporarily turn on SSH on the Linux machine (or a local VM) and start copying.

  • tiny@midwest.social · 6 months ago

    If your Linux distro is using btrfs, you can format it to btrfs and use btrfs send for backups. Otherwise the filesystem shouldn’t be too big of a deal, unless you want to restore files from a Windows machine. If that is the case, use NTFS.
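    The btrfs send workflow mentioned above might look something like this (a sketch; the subvolume paths and the /mnt/backup mount point are placeholders for your own layout):

    ```shell
    # Create a read-only snapshot of the subvolume you want to back up.
    sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-06-01

    # Send the snapshot to the btrfs-formatted external drive.
    sudo btrfs send /home/.snapshots/home-2024-06-01 | sudo btrfs receive /mnt/backup

    # Later backups can be incremental against the previous snapshot,
    # transferring only the changed blocks:
    sudo btrfs send -p /home/.snapshots/home-2024-06-01 /home/.snapshots/home-2024-07-01 \
      | sudo btrfs receive /mnt/backup
    ```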

    • wallmenis@lemmy.one (OP) · 6 months ago

      I use Fedora 40 Kinoite, which uses btrfs, but I am not sure I trust it enough for this data. I also forgot to mention in the original post that I had some problems when overwriting files on NTFS, which caused corruption. Thankfully chkdsk on a Windows machine fixed that, but I wouldn’t like that to happen again when backing up from a Linux machine.

  • Skull giver@popplesburger.hilciferous.nl · 6 months ago

    Depending on your skill level, you may want to consider a deduplicating file system, like btrfs or ZFS. That way, you can make copies of the source drive and deduplicate unchanged segments, making every copy after the first take up only a small percentage of the apparent disk size.

    I’ve personally used duperemove to deduplicate old disk images and it works very well in my experience.
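    A minimal duperemove invocation for that use case might look like this (the mount point is a placeholder; duperemove performs file-level dedup on btrfs and XFS):

    ```shell
    # -d actually submits the deduplication requests (otherwise it only
    # reports duplicates); -r recurses into subdirectories.
    sudo duperemove -dr /mnt/backup/disk-images/

    # On large data sets, --hashfile caches block checksums between runs
    # so repeat scans don't re-read everything:
    sudo duperemove -dr --hashfile=/var/tmp/dupehashes /mnt/backup/
    ```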

    I wouldn’t use NTFS with Linux. The driver is stable enough that it doesn’t corrupt the file system anymore these days, but performance isn’t as good as alternatives.

  • kbal@fedia.io · 6 months ago

    I’d use ext4 for that, personally. You might also consider using full-disk encryption (redhat example) if there’s going to be any data on there you wouldn’t want a burglar to have. Obviously it wouldn’t do much good if you don’t encrypt the other disk as well, but having a fresh one to try it out on makes things easier.
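    The ext4-plus-encryption setup could be sketched like this with LUKS via cryptsetup (device name, label, and mount point are placeholders; this wipes the target):

    ```shell
    # WARNING: destroys all data on the target partition.
    sudo cryptsetup luksFormat /dev/sdX1          # prompts for a passphrase
    sudo cryptsetup open /dev/sdX1 backup         # maps to /dev/mapper/backup
    sudo mkfs.ext4 -L backups /dev/mapper/backup  # create the filesystem
    sudo mount /dev/mapper/backup /mnt/backup

    # When finished:
    sudo umount /mnt/backup
    sudo cryptsetup close backup
    ```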

    • Bogasse@lemmy.ml · 6 months ago

      It depends on the backup format, though:

      • If you store compressed tarballs, they won’t be of any benefit.
      • If you copy whole directories as-is, filesystem-level compression and the ability to deduplicate data (e.g. with duperemove) are likely to save A LOT of storage (I’d bet on a 3× reduction).
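      On btrfs, the filesystem-level compression mentioned above is enabled per mount, for example (device and mount point are placeholders):

      ```shell
      # Transparent zstd compression; only data written after mounting
      # with this option gets compressed.
      sudo mount -o compress=zstd:3 /dev/sdX1 /mnt/backup

      # The compsize tool (separate package) reports the achieved ratio:
      sudo compsize /mnt/backup
      ```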
  • friend_of_satan@lemmy.world · 6 months ago

    ZFS is made for data integrity; I wouldn’t use anything else for my backups. If a file is corrupted, ZFS will tell you which file it is when it encounters a checksum error while reading it.

      • friend_of_satan@lemmy.world · 6 months ago

        If there is a redundant copy of the block, it will auto-recover and just report what happened. Redundancy can be set up with multiple disks, or on a single disk by having ZFS write each block to multiple places via setting the “copies” property to more than 1.
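        The single-disk variant could look like this (pool name and device are placeholders; copies=2 halves usable capacity in exchange for self-healing):

        ```shell
        # Create a pool on the external drive and store two copies of
        # every block, so ZFS can repair bad sectors from the duplicate.
        sudo zpool create backup /dev/sdX
        sudo zfs set copies=2 backup

        # Periodically verify every checksum and repair what it can:
        sudo zpool scrub backup
        sudo zpool status -v backup   # lists any files ZFS could not repair
        ```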

      • refalo@programming.dev · edited · 6 months ago

        If you’re also using raidz or mirroring in ZFS, then yes. It can also do encryption and deduplication.
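        Both are per-dataset properties; a sketch assuming a pool named "backup" already exists:

        ```shell
        # Native encryption, set at dataset creation time:
        sudo zfs create -o encryption=on -o keyformat=passphrase backup/secure

        # Block-level deduplication; note the dedup table needs a lot of
        # RAM, so for cold-storage images an offline file-level tool may
        # be the cheaper option:
        sudo zfs set dedup=on backup/secure
        ```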