I’m writing a program that wraps around dd to try to warn you if you are doing anything stupid. I have thus been giving the man page a good read. While doing this, I noticed that dd supports size suffixes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.

This got me wondering: what’s the largest storage operation you’ve ever done? I’ve taken a couple of images of single-terabyte hard drives, but I was wondering if the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
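
For context, the suffix handling I have in mind looks roughly like this. It’s a minimal Python sketch with my own simplified rules, not dd’s actual parser (real dd also accepts the c, w, b, xM and KiB-style forms that this skips):

```python
# Sketch of GNU-dd-style size suffixes (simplified, not dd's real parser).
# Plain letters are powers of 1024; letters followed by "B" are powers
# of 1000. The c, w, b, xM and KiB-style forms are deliberately left out.
import re

SUFFIXES = "KMGTPEZYRQ"  # kilo ... quetta

def parse_size(text: str) -> int:
    """Turn strings like '512', '10M' or '3GB' into a byte count."""
    m = re.fullmatch(r"(\d+)\s*([A-Za-z]*)", text.strip())
    if not m:
        raise ValueError(f"unparseable size: {text!r}")
    number, suffix = int(m.group(1)), m.group(2)
    if not suffix:
        return number
    base = 1000 if suffix.upper().endswith("B") else 1024
    letter = suffix[0].upper()
    if letter not in SUFFIXES:
        raise ValueError(f"unknown suffix: {suffix!r}")
    return number * base ** (SUFFIXES.index(letter) + 1)

print(parse_size("10M"))   # 10485760
print(parse_size("3GB"))   # 3000000000
print(parse_size("1Q"))    # 1024**10, comfortably more than the internet
```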

  • Davel23@fedia.io · 64 points · 3 months ago

    Not that big by today’s standards, but I once downloaded the Windows 98 beta CD from a friend over dialup, 33.6k at best. Took about a week as I recall.

  • Urist@lemmy.ml · 47 points · 3 months ago

    I obviously downloaded a car after seeing that obnoxious anti-piracy ad.

  • freijon@lemmings.world · 40 points · 3 months ago

    I’m currently backing up my /dev folder to my unlimited cloud storage. The backup of /dev/random has been running for two weeks now.

    • Mike1576218@lemmy.ml · 8 points · 3 months ago

      No wonder. That file is super slow to transfer for some reason. But wait till you get to /dev/urandom. That file has TBs to transfer at whatever pipe you can throw at it…

      • PlexSheep@infosec.pub · 3 points · 3 months ago

        /dev/random and other “files” in /dev are not really files, they are interfaces which can be used to interact with virtual or hardware devices. /dev/random spits out cryptographically secure random data. Another example is /dev/zero, which spits out only zero bytes.

        Both are infinite.

        Not all “files” in /dev are infinite; for example, hard drives can (depending on which technology they use) be accessed under /dev/sda, /dev/sdb and so on.
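
        If you want to poke at them yourself, here’s a tiny illustration (a Python sketch, Linux device paths assumed):

        ```python
        # Read a few bytes from the "infinite" character devices above.
        with open("/dev/random", "rb") as f:
            print(f.read(8))     # 8 cryptographically secure random bytes
        with open("/dev/zero", "rb") as f:
            print(f.read(8))     # b'\x00' * 8, also endless
        # A disk like /dev/sda is a block device with a finite size instead;
        # reading it usually needs root, so it's left commented out here:
        # with open("/dev/sda", "rb") as f:
        #     print(f.read(512))  # the first 512-byte sector
        ```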

  • Neuromancer49@midwest.social · 37 points · 3 months ago

    In grad school I worked with MRI data (hence the username). I had to upload ~500GB to our supercomputing cluster, somewhere around 100,000 MRI images, and wrote 20 or so different machine learning algorithms to process them. All said and done, I ended up with about 2.5TB on the supercomputer. About 500MB ended up being useful and made it into my thesis.

    Don’t stay in school, kids.

  • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 34 points · 3 months ago

    Entire drive/array backups will probably be by far the largest file transfer anyone ever does. The biggest I’ve done was a measly 20TB over the internet which took forever.

    Outside of that the largest “file” I’ve copied was just over 1TB which was a SQL file backup for our main databases at work.

    • cbarrick@lemmy.world · 6 points · 3 months ago

      +1

      From an order of magnitude perspective, the max is terabytes. No “normal” users are dealing with petabytes. And if you are dealing with petabytes, you’re not using some random poster’s program from reddit.

      For a concrete cap, I’d say 256 tebibytes…

  • ramble81@lemm.ee · 20 points · 3 months ago

    I’ve done a 1PB sync between a pair of 8-node SAN clusters as one was being physically moved, since it’d be faster to seed the data and then run a delta sync than to try to do it all over a 10Gb pipe.
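
    (Back-of-the-envelope: 1PB over a fully saturated 10Gb/s link is about 8×10^15 bits ÷ 10^10 bit/s ≈ 800,000 seconds, or a bit over nine days before any overhead, so seeding plus a delta sync was clearly the saner option.)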

      • Taleya@aussie.zone · 7 points · 3 months ago

        A small DCP is around 500GB. But that’s like basic film shizz: 2D, 5.1 audio. For comparison, the 3D Deadpool 2 teaser was 10GB.

        Aspera’s commonly used for transmission due to the way it multiplexes. It’s the same protocol behind Netflix and other streamers, although we don’t have to worry about preloading chunks.

        My laughter is mostly because we’re transmitting to a couple thousand clients at once, so even with a small DCP that’s around a PB dropped without blinking (500GB × ~2,000 clients ≈ 1PB).

        • potajito@lemmy.dbzer0.com · 2 points · 3 months ago

          Ahhh, thanks for the reply! Makes sense! We also use Aspera here at work (videogames) but don’t move that amount, not even close.

        • daq@lemmy.sdf.org · 1 point · 3 months ago

          I used to work in the same industry. We transferred several PBs from West US to Australia using Aspera via thick AWS pipes. Awesome software.

          • Taleya@aussie.zone · 1 point · 3 months ago

            Hahahah did you enjoy Australian Internet? It’s wonderfully archaic

            (MPS, Delux, Gofilex or Qubewire?)

        • MoonMelon@lemmy.ml · 1 point · 3 months ago

          In the early 2000s I worked on an animated film. The studio was in the southern part of Orange County CA, and the final color grading / print (still not totally digital then) was done in LA. It was faster to courier a box of hard drives than to transfer electronically. We had to do it a bunch of times because of various notes/changes/fuck ups. Then the results got courier’d back because the director couldn’t be bothered to travel for the fucking million dollars he was making.

  • neidu2@feddit.nl · 16 points · 3 months ago

    I don’t remember how many files, but typically these geophysical recordings clock in at 10-30 GB. What I do remember, though, was the total transfer size: 4TB. It was a bunch of .segd files, stored in a server cluster that was mounted in a shipping container for easy transport and lifting onboard survey ships. Some geophysics processors needed it on the other side of the world. There was nobody physically heading in the same direction as the transfer, so we figured it would just be easier to rsync it over 4G. It took a little over a week to transfer.
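
    (For scale: 4TB in a little over a week averages out to roughly 6MB/s, or about 50Mbit/s sustained, which is believable for a decent 4G link.)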

    Normally when we have transfers of a substantial size going far, we ship it on LTO. For short-distance transfers we usually run a fiber, and I have no idea how big the largest transfer job has been that way. Must be in the hundreds of TB. The entire cluster is 1.2PB, but I can’t recall ever having to transfer everything in one go, as the receiving end usually has a lot less space.

  • d00phy@lemmy.world · 14 points · 3 months ago

    I’ve migrated petabytes from one GPFS file system to another. More than once, in fact. I’ve also migrated about 600TB of data from D3 tape format to 9940.

  • Trigger2_2000@sh.itjust.works · 11 points · 3 months ago

    I once abused an SMTP relay (my own) by emailing Novell a 400+ MB memory dump. Their FTP site kept timing out.

    After all that, and them swearing they had to have it, the OS team said “Nope, we’re not going to look at it”. Guess how I feel about Novell after that?

    This was in the mid-90’s.

  • HarriPotero@lemmy.world · 11 points · 3 months ago

    I worked at a niche factory some 20 years ago. We had a tape robot with 8 tapes at some 200GB each. It’d do a full backup of everyone’s home directories and mailboxes every week, and incremental backups nightly.

    We’d keep the weekly backups on-site in a safe. Once a month I’d do a run to another plant one town over with a full backup.

    I guess at most we’d need five tapes. If they still use it, and with modern tapes, it should scale nicely. Today’s LTO tapes are 18TB. Driving five tapes over for half an hour would give a nice bandwidth of 50GB/s. The bottleneck would be the write speed to tape at 400MB/s.
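
    (That’s 5 × 18TB = 90TB moved in roughly 1,800 seconds, which is where the 50GB/s figure comes from; filling each 18TB tape at 400MB/s would take about 12.5 hours, though.)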