Hi all!

I have a Debian stable server with two HDDs in an md RAID 1 which contains an encrypted ext4 filesystem.

sda          8:0    0   2.7T  0 disk
├─sda1       8:1    0     1G  0 part                    
│ └─md0      9:0    0  1023M  0 raid1 /boot             
├─sda2       8:2    0   2.7T  0 part                    
│ └─md2      9:2    0   2.7T  0 raid1                   
│   └─mdcrypt                                           
│          253:0    0   2.7T  0 crypt /                 
└─sda3       8:3    0     1M  0 part
sdb          8:16   0   2.7T  0 disk                    
├─sdb1       8:17   0     1G  0 part
│ └─md0      9:0    0  1023M  0 raid1 /boot             
├─sdb2       8:18   0   2.7T  0 part
│ └─md2      9:2    0   2.7T  0 raid1                   
│   └─mdcrypt
│          253:0    0   2.7T  0 crypt /                 
└─sdb3       8:19   0     1M  0 part

I’d like to migrate that over to BTRFS to make use of deduplication and snapshots.

But I have no idea how to set it up, since BTRFS has its own RAID 1 support. Should I keep using the existing md array? Or should I take the drives out of the array, encrypt each one, and then set up the BTRFS RAID on top of that?

Or should I do something else entirely?

  • UnfortunateShort · 4 points · 10 hours ago

    If I had to do encrypted btrfs RAID from scratch, I would probably (see the command sketch after this list):

    1. Set up LUKS on both discs
    2. Unlock both
    3. Create a btrfs filesystem on one of the mapper devices
    4. Add the other with btrfs device add /path/to/other/mapper /path/to/btrfs/mountpoint
    5. Balance with btrfs balance start -mconvert=raid1 -dconvert=raid1 /path/to/btrfs/mountpoint
    6. Add the LUKS volumes to crypttab, the btrfs filesystem to fstab, and rebuild/reconfigure the initramfs and bootloader as necessary
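
    A minimal sketch of those steps, assuming the two partitions end up as /dev/sda2 and /dev/sdb2 and using the hypothetical mapper names crypt1/crypt2 and mount point /mnt (this wipes whatever is on those partitions):

      # encrypt and open both partitions
      cryptsetup luksFormat /dev/sda2
      cryptsetup luksFormat /dev/sdb2
      cryptsetup open /dev/sda2 crypt1
      cryptsetup open /dev/sdb2 crypt2

      # create btrfs on the first mapper, then add the second and convert to RAID 1
      mkfs.btrfs /dev/mapper/crypt1
      mount /dev/mapper/crypt1 /mnt
      btrfs device add /dev/mapper/crypt2 /mnt
      btrfs balance start -mconvert=raid1 -dconvert=raid1 /mnt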

    In that scenario, you would probably want to use a keyfile to unlock the other disc without re-entering the password.
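
    A rough sketch of that keyfile approach (the file path, mapper names and UUID placeholders are just examples):

      # generate a keyfile and enrol it in the second LUKS volume
      dd if=/dev/urandom of=/etc/luks/crypt2.key bs=64 count=1
      chmod 0400 /etc/luks/crypt2.key
      cryptsetup luksAddKey /dev/sdb2 /etc/luks/crypt2.key

      # /etc/crypttab: first volume asks for the passphrase, second uses the keyfile
      crypt1  UUID=<uuid-of-sda2>  none                  luks
      crypt2  UUID=<uuid-of-sdb2>  /etc/luks/crypt2.key  luks

    If the keyfile has to unlock a volume that's needed for /, it also has to end up in the initramfs, which is its own can of worms.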

    Now, that’s off the top of my head and seems kinda stupidly complicated to me. IIRC btrfs has a stable feature to convert ext4 to btrfs. It shouldn’t matter what sits underneath, so you could take your chances and just try that on your existing ext4 volume.

    (Edit: But to be absolutely clear: I would perform a backup first :D)
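
    If you go the conversion route, it would look roughly like this on the mdcrypt device from the original post (run from a live/rescue system with the array assembled and the LUKS volume opened but not mounted; btrfs-convert ships with btrfs-progs):

      # the filesystem must be clean before converting
      fsck.ext4 -f /dev/mapper/mdcrypt
      btrfs-convert /dev/mapper/mdcrypt
      # the old ext4 metadata is kept in an ext2_saved subvolume; btrfs-convert -r rolls back,
      # and deleting that subvolume later makes the conversion permanent and frees the space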

    • Björn Tantau (OP) · 1 point · 10 hours ago

      Thanks, sounds good. I need the running system, so I’d first set up BTRFS on one disc, test it and then add the other disc.

  • @far_university190@feddit.org · 2 points · 12 hours ago

    LUKS-encrypt both drives, then btrfs RAID 1 across the two mappers?

    But then you need to decrypt both drives. I think there is some script to decrypt two drives with the same key, but I can't find it.

    Edit: btrfs RAID is superior because of bitrot detection + healing; most normal RAID can detect bitrot but not heal it.
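
    A minimal sketch of that layout, assuming both LUKS volumes are already opened as the hypothetical mappers crypt1 and crypt2:

      # create a btrfs RAID 1 (data and metadata) directly across both mappers
      mkfs.btrfs -m raid1 -d raid1 /dev/mapper/crypt1 /dev/mapper/crypt2
      mount /dev/mapper/crypt1 /mnt    # naming either device mounts the whole filesystem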

    • @Ooops@feddit.org · 3 points · 7 hours ago

      Decryption isn’t a problem if you use the systemd hooks when creating your initramfs. They try to decrypt every given LUKS volume with the first key provided and only ask for additional keys if that fails.

      I have 3 disks in a btrfs raid setup, 4 partitions (1 for the raid setup on each, plus a swap partition on the biggest disk), all encrypted with the same password.

      No script needed, just add rd.luks.name=<UUID1>=cryptroot1 rd.luks.name=<UUID2>=cryptroot2 rd.luks.name=<UUID3>=cryptroot3 rd.luks.name=<UUID4>=cryptswap to your kernel parameters and unlock all 4 with one password at boot.
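
      For the kernel parameter part, that would be roughly the following (assuming GRUB and an initramfs built with the systemd/dracut cryptsetup hooks, which are what understand rd.luks.name; UUIDs and names are placeholders):

        # /etc/default/grub
        GRUB_CMDLINE_LINUX_DEFAULT="rd.luks.name=<UUID1>=cryptroot1 rd.luks.name=<UUID2>=cryptroot2"
        # regenerate the bootloader config afterwards
        update-grub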

    • Björn Tantau (OP) · 1 point · 12 hours ago

      I will be decrypting from a small busybox inside the initrd. I suspect that it will decrypt both drives if the passphrase is the same. At least that’s how it works on the desktop.
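
      On Debian's stock busybox initramfs that should work via the decrypt_keyctl keyscript from the cryptsetup packages, which caches the passphrase in the kernel keyring so volumes sharing a cache label unlock with one prompt. A rough sketch (UUIDs are placeholders, "crypt_disks" is just a shared label, not a file):

        # /etc/crypttab
        crypt1  UUID=<uuid-of-sda2>  crypt_disks  luks,keyscript=decrypt_keyctl
        crypt2  UUID=<uuid-of-sdb2>  crypt_disks  luks,keyscript=decrypt_keyctl
        # rebuild the initramfs afterwards
        update-initramfs -u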

    • Björn Tantau (OP) · 2 points · 13 hours ago

      Why not ZFS’s own encryption?

      Though I would rather go with BTRFS since I don’t have any experience with ZFS.