I’m writing a program that wraps dd to try to warn you if you’re doing anything stupid. I’ve been giving the man page a good read for that, and I noticed that dd supports size suffixes all the way up to quettabytes, a unit orders of magnitude larger than all the data on the entire internet.
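Since a wrapper like that has to understand dd’s size suffixes anyway, here’s a rough sketch of how the parsing might look. The names are mine, and it only handles the capital-letter suffixes (GNU dd also takes `c`, `w`, `b`, and lowercase `kB`); the multipliers follow GNU dd’s convention that `K` is binary (1024) while `KB` is decimal (1000):

```python
# Toy parser for dd-style size arguments (e.g. bs=4M, count=2K).
# Convention per GNU dd: "K" = 1024, "KB" = 1000, and so on up
# through "Q" (2**100) and "QB" (10**30, a quettabyte).
import re

# K -> 1, M -> 2, ... Q -> 10 (the exponent on the base)
_POWERS = {p: i + 1 for i, p in enumerate("KMGTPEZYRQ")}

def parse_size(text: str) -> int:
    """Turn a dd-style size like '4M' or '10GB' into a byte count."""
    m = re.fullmatch(r"(\d+)(?:([KMGTPEZYRQ])(B)?)?", text)
    if not m:
        raise ValueError(f"can't parse size: {text!r}")
    number, prefix, decimal = m.groups()
    if prefix is None:
        return int(number)          # bare number: already bytes
    base = 1000 if decimal else 1024  # "MB" is decimal, "M" is binary
    return int(number) * base ** _POWERS[prefix]
```

With that, `parse_size("1Q")` comes out to 2**100 bytes, which is the absurdly large ceiling that prompted this post.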
This got me wondering: what’s the largest storage operation you guys have done? I’ve taken a couple of images of hard drives that were a terabyte each, but I was wondering whether the sysadmins among you have had to do something with, e.g., a giant RAID 10 array.
Upgraded a NAS for the office. It was reaching capacity, so we replaced it. The transfer was maybe 30 TB. Just used rsync. That local transfer was relatively fast; what took longer was for the NAS to replicate to its mirror in a DC on the other side of the country.
Yeah, it’s kind of wild how fast (and stable) rsync is, especially when you grew up with the extremely temperamental Windows copy dialog, which I’ve seen fuck up a 50 MB transfer before.
The biggest one I’ve done in one shot with rsync was only about 1 TB, but I was braced for it to take half a day and cause all sorts of trouble. But no, it just sent it across perfectly the first time, way faster than I was expecting.
Yeah, shout out for rsync also. It’s awesome. Combine it with ssh & it feels pretty secure too.
Never dealt with Windows. rsync just makes sense. I especially like that it’s idempotent, so I can just run it two or three times and it’ll be near instant on the subsequent runs.
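The reason the second run is near instant is rsync’s default “quick check”: a file is skipped when its size and modification time already match on the destination. Here’s a toy sketch of that idea (the function name is mine, and real rsync does far more, including delta transfer of changed files):

```python
# Toy illustration of rsync's default "quick check": skip any file
# whose size and mtime already match the copy at the destination,
# which is why a repeated run finishes almost instantly.
import os
import shutil

def quick_copy(src: str, dst: str) -> list[str]:
    """Copy files from src to dst, skipping apparent matches.
    Returns the names of the files actually copied."""
    os.makedirs(dst, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(src)):
        s, d = os.path.join(src, name), os.path.join(dst, name)
        st = os.stat(s)
        if os.path.exists(d):
            dt = os.stat(d)
            if dt.st_size == st.st_size and int(dt.st_mtime) == int(st.st_mtime):
                continue  # looks unchanged: skip it, like rsync's quick check
        shutil.copy2(s, d)  # copy2 preserves mtime, so the next run skips it
        copied.append(name)
    return copied
```

Run it twice against the same source and the second call copies nothing, which is the behaviour being described above. (Strictly speaking that’s “cheap to re-run” rather than idempotent in the formal sense, but the effect is the same.)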